linux-block.vger.kernel.org archive mirror
* remove bi_phys_segments and related cleanups
@ 2019-06-06 10:28 Christoph Hellwig
  2019-06-06 10:28 ` [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep Christoph Hellwig
                   ` (7 more replies)
  0 siblings, 8 replies; 32+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:28 UTC (permalink / raw)
  To: axboe; +Cc: Matias Bjorling, linux-block

Hi Jens,

this series removes the bi_phys_segments member in struct bio
and cleans up various areas around it.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep
  2019-06-06 10:28 remove bi_phys_segments and related cleanups Christoph Hellwig
@ 2019-06-06 10:28 ` Christoph Hellwig
  2019-06-07  5:58   ` Hannes Reinecke
                     ` (2 more replies)
  2019-06-06 10:29 ` [PATCH 2/6] block: remove blk_init_request_from_bio Christoph Hellwig
                   ` (6 subsequent siblings)
  7 siblings, 3 replies; 32+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:28 UTC (permalink / raw)
  To: axboe; +Cc: Matias Bjorling, linux-block, Chaitanya Kulkarni

The priority field also makes sense for passthrough requests, so
initialize it in blk_rq_bio_prep.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 block/blk-core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ee1b35fe8572..9b88b1a3eb43 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -680,7 +680,6 @@ void blk_init_request_from_bio(struct request *req, struct bio *bio)
 		req->cmd_flags |= REQ_FAILFAST_MASK;
 
 	req->__sector = bio->bi_iter.bi_sector;
-	req->ioprio = bio_prio(bio);
 	req->write_hint = bio->bi_write_hint;
 	blk_rq_bio_prep(req->q, req, bio);
 }
@@ -1436,6 +1435,7 @@ void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
 
 	rq->__data_len = bio->bi_iter.bi_size;
 	rq->bio = rq->biotail = bio;
+	rq->ioprio = bio_prio(bio);
 
 	if (bio->bi_disk)
 		rq->rq_disk = bio->bi_disk;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 2/6] block: remove blk_init_request_from_bio
  2019-06-06 10:28 remove bi_phys_segments and related cleanups Christoph Hellwig
  2019-06-06 10:28 ` [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep Christoph Hellwig
@ 2019-06-06 10:29 ` Christoph Hellwig
  2019-06-07  5:59   ` Hannes Reinecke
                     ` (3 more replies)
  2019-06-06 10:29 ` [PATCH 3/6] block: remove the bi_phys_segments field in struct bio Christoph Hellwig
                   ` (5 subsequent siblings)
  7 siblings, 4 replies; 32+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:29 UTC (permalink / raw)
  To: axboe; +Cc: Matias Bjorling, linux-block

lightnvm should have never used this function, as it is sending
passthrough requests, so switch it to blk_rq_append_bio like all the
other passthrough request users.  Inline blk_init_request_from_bio into
the only remaining caller.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c             | 11 -----------
 block/blk-mq.c               |  7 ++++++-
 drivers/nvme/host/lightnvm.c |  2 +-
 include/linux/blkdev.h       |  1 -
 4 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9b88b1a3eb43..a9cb465c7ef7 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -674,17 +674,6 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
 	return false;
 }
 
-void blk_init_request_from_bio(struct request *req, struct bio *bio)
-{
-	if (bio->bi_opf & REQ_RAHEAD)
-		req->cmd_flags |= REQ_FAILFAST_MASK;
-
-	req->__sector = bio->bi_iter.bi_sector;
-	req->write_hint = bio->bi_write_hint;
-	blk_rq_bio_prep(req->q, req, bio);
-}
-EXPORT_SYMBOL_GPL(blk_init_request_from_bio);
-
 static void handle_bad_sector(struct bio *bio, sector_t maxsector)
 {
 	char b[BDEVNAME_SIZE];
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ce0f5f4ede70..61457bffa55f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1766,7 +1766,12 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 
 static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
 {
-	blk_init_request_from_bio(rq, bio);
+	if (bio->bi_opf & REQ_RAHEAD)
+		rq->cmd_flags |= REQ_FAILFAST_MASK;
+
+	rq->__sector = bio->bi_iter.bi_sector;
+	rq->write_hint = bio->bi_write_hint;
+	blk_rq_bio_prep(rq->q, rq, bio);
 
 	blk_account_io_start(rq, true);
 }
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 4f20a10b39d3..ba009d4c9dfa 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -660,7 +660,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
 	rq->cmd_flags &= ~REQ_FAILFAST_DRIVER;
 
 	if (rqd->bio)
-		blk_init_request_from_bio(rq, rqd->bio);
+		blk_rq_append_bio(rq, &rqd->bio);
 	else
 		rq->ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 592669bcc536..c67a9510e532 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -828,7 +828,6 @@ extern void blk_unregister_queue(struct gendisk *disk);
 extern blk_qc_t generic_make_request(struct bio *bio);
 extern blk_qc_t direct_make_request(struct bio *bio);
 extern void blk_rq_init(struct request_queue *q, struct request *rq);
-extern void blk_init_request_from_bio(struct request *req, struct bio *bio);
 extern void blk_put_request(struct request *);
 extern struct request *blk_get_request(struct request_queue *, unsigned int op,
 				       blk_mq_req_flags_t flags);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 3/6] block: remove the bi_phys_segments field in struct bio
  2019-06-06 10:28 remove bi_phys_segments and related cleanups Christoph Hellwig
  2019-06-06 10:28 ` [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep Christoph Hellwig
  2019-06-06 10:29 ` [PATCH 2/6] block: remove blk_init_request_from_bio Christoph Hellwig
@ 2019-06-06 10:29 ` Christoph Hellwig
  2019-06-07  6:02   ` Hannes Reinecke
  2019-06-10 18:30   ` Bart Van Assche
  2019-06-06 10:29 ` [PATCH 4/6] block: simplify blk_recalc_rq_segments Christoph Hellwig
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 32+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:29 UTC (permalink / raw)
  To: axboe; +Cc: Matias Bjorling, linux-block

We only need the number of segments in the blk-mq submission path.
Remove the field from struct bio, and return it from a variant of
blk_queue_split instead, so that it can be passed as an argument to
those functions that need the value.

This also means we stop recounting segments except for cloning
and partial segments.

To keep the number of arguments in this hot path down, remove the
now pointless struct request_queue argument from those functions
that grew a nr_segs argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 Documentation/block/biodoc.txt |  1 -
 block/bfq-iosched.c            |  5 ++-
 block/bio.c                    | 15 +------
 block/blk-core.c               | 32 +++++++--------
 block/blk-map.c                | 10 ++++-
 block/blk-merge.c              | 75 ++++++++++++----------------------
 block/blk-mq-sched.c           | 26 +++++++-----
 block/blk-mq-sched.h           | 10 +++--
 block/blk-mq.c                 | 23 ++++++-----
 block/blk.h                    | 23 ++++++-----
 block/kyber-iosched.c          |  5 ++-
 block/mq-deadline.c            |  5 ++-
 drivers/md/raid5.c             |  1 -
 include/linux/bio.h            |  1 -
 include/linux/blk-mq.h         |  2 +-
 include/linux/blk_types.h      |  6 ---
 include/linux/blkdev.h         |  1 -
 include/linux/elevator.h       |  2 +-
 18 files changed, 106 insertions(+), 137 deletions(-)

diff --git a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.txt
index ac18b488cb5e..31c177663ed5 100644
--- a/Documentation/block/biodoc.txt
+++ b/Documentation/block/biodoc.txt
@@ -436,7 +436,6 @@ struct bio {
        struct bvec_iter	bi_iter;	/* current index into bio_vec array */
 
        unsigned int	bi_size;     /* total size in bytes */
-       unsigned short 	bi_phys_segments; /* segments after physaddr coalesce*/
        unsigned short	bi_hw_segments; /* segments after DMA remapping */
        unsigned int	bi_max;	     /* max bio_vecs we can hold
                                         used as index into pool */
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index f8d430f88d25..a6bf842cbe16 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2027,7 +2027,8 @@ static void bfq_remove_request(struct request_queue *q,
 
 }
 
-static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
+static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
+		unsigned int nr_segs)
 {
 	struct request_queue *q = hctx->queue;
 	struct bfq_data *bfqd = q->elevator->elevator_data;
@@ -2050,7 +2051,7 @@ static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
 		bfqd->bio_bfqq = NULL;
 	bfqd->bio_bic = bic;
 
-	ret = blk_mq_sched_try_merge(q, bio, &free);
+	ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
 
 	if (free)
 		blk_mq_free_request(free);
diff --git a/block/bio.c b/block/bio.c
index 683cbb40f051..d550d36392e9 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -558,14 +558,6 @@ void bio_put(struct bio *bio)
 }
 EXPORT_SYMBOL(bio_put);
 
-int bio_phys_segments(struct request_queue *q, struct bio *bio)
-{
-	if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
-		blk_recount_segments(q, bio);
-
-	return bio->bi_phys_segments;
-}
-
 /**
  * 	__bio_clone_fast - clone a bio that shares the original bio's biovec
  * 	@bio: destination bio
@@ -739,7 +731,7 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio,
 	if (bio_full(bio))
 		return 0;
 
-	if (bio->bi_phys_segments >= queue_max_segments(q))
+	if (bio->bi_vcnt >= queue_max_segments(q))
 		return 0;
 
 	bvec = &bio->bi_io_vec[bio->bi_vcnt];
@@ -749,8 +741,6 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio,
 	bio->bi_vcnt++;
  done:
 	bio->bi_iter.bi_size += len;
-	bio->bi_phys_segments = bio->bi_vcnt;
-	bio_set_flag(bio, BIO_SEG_VALID);
 	return len;
 }
 
@@ -1910,10 +1900,7 @@ void bio_trim(struct bio *bio, int offset, int size)
 	if (offset == 0 && size == bio->bi_iter.bi_size)
 		return;
 
-	bio_clear_flag(bio, BIO_SEG_VALID);
-
 	bio_advance(bio, offset << 9);
-
 	bio->bi_iter.bi_size = size;
 
 	if (bio_integrity(bio))
diff --git a/block/blk-core.c b/block/blk-core.c
index a9cb465c7ef7..48088dff4ec0 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -537,15 +537,15 @@ void blk_put_request(struct request *req)
 }
 EXPORT_SYMBOL(blk_put_request);
 
-bool bio_attempt_back_merge(struct request_queue *q, struct request *req,
-			    struct bio *bio)
+bool bio_attempt_back_merge(struct request *req, struct bio *bio,
+		unsigned int nr_segs)
 {
 	const int ff = bio->bi_opf & REQ_FAILFAST_MASK;
 
-	if (!ll_back_merge_fn(q, req, bio))
+	if (!ll_back_merge_fn(req, bio, nr_segs))
 		return false;
 
-	trace_block_bio_backmerge(q, req, bio);
+	trace_block_bio_backmerge(req->q, req, bio);
 
 	if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff)
 		blk_rq_set_mixed_merge(req);
@@ -558,15 +558,15 @@ bool bio_attempt_back_merge(struct request_queue *q, struct request *req,
 	return true;
 }
 
-bool bio_attempt_front_merge(struct request_queue *q, struct request *req,
-			     struct bio *bio)
+bool bio_attempt_front_merge(struct request *req, struct bio *bio,
+		unsigned int nr_segs)
 {
 	const int ff = bio->bi_opf & REQ_FAILFAST_MASK;
 
-	if (!ll_front_merge_fn(q, req, bio))
+	if (!ll_front_merge_fn(req, bio, nr_segs))
 		return false;
 
-	trace_block_bio_frontmerge(q, req, bio);
+	trace_block_bio_frontmerge(req->q, req, bio);
 
 	if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff)
 		blk_rq_set_mixed_merge(req);
@@ -608,6 +608,7 @@ bool bio_attempt_discard_merge(struct request_queue *q, struct request *req,
  * blk_attempt_plug_merge - try to merge with %current's plugged list
  * @q: request_queue new bio is being queued at
  * @bio: new bio being queued
+ * @nr_segs: number of segments in @bio
  * @same_queue_rq: pointer to &struct request that gets filled in when
  * another request associated with @q is found on the plug list
  * (optional, may be %NULL)
@@ -626,7 +627,7 @@ bool bio_attempt_discard_merge(struct request_queue *q, struct request *req,
  * Caller must ensure !blk_queue_nomerges(q) beforehand.
  */
 bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
-			    struct request **same_queue_rq)
+		unsigned int nr_segs, struct request **same_queue_rq)
 {
 	struct blk_plug *plug;
 	struct request *rq;
@@ -655,10 +656,10 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
 
 		switch (blk_try_merge(rq, bio)) {
 		case ELEVATOR_BACK_MERGE:
-			merged = bio_attempt_back_merge(q, rq, bio);
+			merged = bio_attempt_back_merge(rq, bio, nr_segs);
 			break;
 		case ELEVATOR_FRONT_MERGE:
-			merged = bio_attempt_front_merge(q, rq, bio);
+			merged = bio_attempt_front_merge(rq, bio, nr_segs);
 			break;
 		case ELEVATOR_DISCARD_MERGE:
 			merged = bio_attempt_discard_merge(q, rq, bio);
@@ -1414,14 +1415,9 @@ bool blk_update_request(struct request *req, blk_status_t error,
 }
 EXPORT_SYMBOL_GPL(blk_update_request);
 
-void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
-		     struct bio *bio)
+void blk_rq_bio_prep(struct request *rq, struct bio *bio, unsigned int nr_segs)
 {
-	if (bio_has_data(bio))
-		rq->nr_phys_segments = bio_phys_segments(q, bio);
-	else if (bio_op(bio) == REQ_OP_DISCARD)
-		rq->nr_phys_segments = 1;
-
+	rq->nr_phys_segments = nr_segs;
 	rq->__data_len = bio->bi_iter.bi_size;
 	rq->bio = rq->biotail = bio;
 	rq->ioprio = bio_prio(bio);
diff --git a/block/blk-map.c b/block/blk-map.c
index db9373bd31ac..3a62e471d81b 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -18,13 +18,19 @@
 int blk_rq_append_bio(struct request *rq, struct bio **bio)
 {
 	struct bio *orig_bio = *bio;
+	struct bvec_iter iter;
+	struct bio_vec bv;
+	unsigned int nr_segs = 0;
 
 	blk_queue_bounce(rq->q, bio);
 
+	bio_for_each_bvec(bv, *bio, iter)
+		nr_segs++;
+
 	if (!rq->bio) {
-		blk_rq_bio_prep(rq->q, rq, *bio);
+		blk_rq_bio_prep(rq, *bio, nr_segs);
 	} else {
-		if (!ll_back_merge_fn(rq->q, rq, *bio)) {
+		if (!ll_back_merge_fn(rq, *bio, nr_segs)) {
 			if (orig_bio != *bio) {
 				bio_put(*bio);
 				*bio = orig_bio;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 17713d7d98d5..72b4fd89a22d 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -258,32 +258,29 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	return do_split ? new : NULL;
 }
 
-void blk_queue_split(struct request_queue *q, struct bio **bio)
+void __blk_queue_split(struct request_queue *q, struct bio **bio,
+		unsigned int *nr_segs)
 {
-	struct bio *split, *res;
-	unsigned nsegs;
+	struct bio *split;
 
 	switch (bio_op(*bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
-		split = blk_bio_discard_split(q, *bio, &q->bio_split, &nsegs);
+		split = blk_bio_discard_split(q, *bio, &q->bio_split, nr_segs);
 		break;
 	case REQ_OP_WRITE_ZEROES:
-		split = blk_bio_write_zeroes_split(q, *bio, &q->bio_split, &nsegs);
+		split = blk_bio_write_zeroes_split(q, *bio, &q->bio_split,
+				nr_segs);
 		break;
 	case REQ_OP_WRITE_SAME:
-		split = blk_bio_write_same_split(q, *bio, &q->bio_split, &nsegs);
+		split = blk_bio_write_same_split(q, *bio, &q->bio_split,
+				nr_segs);
 		break;
 	default:
-		split = blk_bio_segment_split(q, *bio, &q->bio_split, &nsegs);
+		split = blk_bio_segment_split(q, *bio, &q->bio_split, nr_segs);
 		break;
 	}
 
-	/* physical segments can be figured out during splitting */
-	res = split ? split : *bio;
-	res->bi_phys_segments = nsegs;
-	bio_set_flag(res, BIO_SEG_VALID);
-
 	if (split) {
 		/* there isn't chance to merge the splitted bio */
 		split->bi_opf |= REQ_NOMERGE;
@@ -304,6 +301,13 @@ void blk_queue_split(struct request_queue *q, struct bio **bio)
 		*bio = split;
 	}
 }
+
+void blk_queue_split(struct request_queue *q, struct bio **bio)
+{
+	unsigned int nr_segs;
+
+	__blk_queue_split(q, bio, &nr_segs);
+}
 EXPORT_SYMBOL(blk_queue_split);
 
 static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
@@ -338,17 +342,6 @@ void blk_recalc_rq_segments(struct request *rq)
 	rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio);
 }
 
-void blk_recount_segments(struct request_queue *q, struct bio *bio)
-{
-	struct bio *nxt = bio->bi_next;
-
-	bio->bi_next = NULL;
-	bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio);
-	bio->bi_next = nxt;
-
-	bio_set_flag(bio, BIO_SEG_VALID);
-}
-
 static inline struct scatterlist *blk_next_sg(struct scatterlist **sg,
 		struct scatterlist *sglist)
 {
@@ -519,16 +512,13 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
 }
 EXPORT_SYMBOL(blk_rq_map_sg);
 
-static inline int ll_new_hw_segment(struct request_queue *q,
-				    struct request *req,
-				    struct bio *bio)
+static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
+		unsigned int nr_phys_segs)
 {
-	int nr_phys_segs = bio_phys_segments(q, bio);
-
-	if (req->nr_phys_segments + nr_phys_segs > queue_max_segments(q))
+	if (req->nr_phys_segments + nr_phys_segs > queue_max_segments(req->q))
 		goto no_merge;
 
-	if (blk_integrity_merge_bio(q, req, bio) == false)
+	if (blk_integrity_merge_bio(req->q, req, bio) == false)
 		goto no_merge;
 
 	/*
@@ -539,12 +529,11 @@ static inline int ll_new_hw_segment(struct request_queue *q,
 	return 1;
 
 no_merge:
-	req_set_nomerge(q, req);
+	req_set_nomerge(req->q, req);
 	return 0;
 }
 
-int ll_back_merge_fn(struct request_queue *q, struct request *req,
-		     struct bio *bio)
+int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
 {
 	if (req_gap_back_merge(req, bio))
 		return 0;
@@ -553,21 +542,15 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req,
 		return 0;
 	if (blk_rq_sectors(req) + bio_sectors(bio) >
 	    blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
-		req_set_nomerge(q, req);
+		req_set_nomerge(req->q, req);
 		return 0;
 	}
-	if (!bio_flagged(req->biotail, BIO_SEG_VALID))
-		blk_recount_segments(q, req->biotail);
-	if (!bio_flagged(bio, BIO_SEG_VALID))
-		blk_recount_segments(q, bio);
 
-	return ll_new_hw_segment(q, req, bio);
+	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
-int ll_front_merge_fn(struct request_queue *q, struct request *req,
-		      struct bio *bio)
+int ll_front_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
 {
-
 	if (req_gap_front_merge(req, bio))
 		return 0;
 	if (blk_integrity_rq(req) &&
@@ -575,15 +558,11 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req,
 		return 0;
 	if (blk_rq_sectors(req) + bio_sectors(bio) >
 	    blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
-		req_set_nomerge(q, req);
+		req_set_nomerge(req->q, req);
 		return 0;
 	}
-	if (!bio_flagged(bio, BIO_SEG_VALID))
-		blk_recount_segments(q, bio);
-	if (!bio_flagged(req->bio, BIO_SEG_VALID))
-		blk_recount_segments(q, req->bio);
 
-	return ll_new_hw_segment(q, req, bio);
+	return ll_new_hw_segment(req, bio, nr_segs);
 }
 
 static bool req_attempt_discard_merge(struct request_queue *q, struct request *req,
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 74c6bb871f7e..72124d76b96a 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -224,7 +224,7 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 }
 
 bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
-			    struct request **merged_request)
+		unsigned int nr_segs, struct request **merged_request)
 {
 	struct request *rq;
 
@@ -232,7 +232,7 @@ bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
 	case ELEVATOR_BACK_MERGE:
 		if (!blk_mq_sched_allow_merge(q, rq, bio))
 			return false;
-		if (!bio_attempt_back_merge(q, rq, bio))
+		if (!bio_attempt_back_merge(rq, bio, nr_segs))
 			return false;
 		*merged_request = attempt_back_merge(q, rq);
 		if (!*merged_request)
@@ -241,7 +241,7 @@ bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
 	case ELEVATOR_FRONT_MERGE:
 		if (!blk_mq_sched_allow_merge(q, rq, bio))
 			return false;
-		if (!bio_attempt_front_merge(q, rq, bio))
+		if (!bio_attempt_front_merge(rq, bio, nr_segs))
 			return false;
 		*merged_request = attempt_front_merge(q, rq);
 		if (!*merged_request)
@@ -260,7 +260,7 @@ EXPORT_SYMBOL_GPL(blk_mq_sched_try_merge);
  * of them.
  */
 bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
-			   struct bio *bio)
+			   struct bio *bio, unsigned int nr_segs)
 {
 	struct request *rq;
 	int checked = 8;
@@ -277,11 +277,13 @@ bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
 		switch (blk_try_merge(rq, bio)) {
 		case ELEVATOR_BACK_MERGE:
 			if (blk_mq_sched_allow_merge(q, rq, bio))
-				merged = bio_attempt_back_merge(q, rq, bio);
+				merged = bio_attempt_back_merge(rq, bio,
+						nr_segs);
 			break;
 		case ELEVATOR_FRONT_MERGE:
 			if (blk_mq_sched_allow_merge(q, rq, bio))
-				merged = bio_attempt_front_merge(q, rq, bio);
+				merged = bio_attempt_front_merge(rq, bio,
+						nr_segs);
 			break;
 		case ELEVATOR_DISCARD_MERGE:
 			merged = bio_attempt_discard_merge(q, rq, bio);
@@ -304,13 +306,14 @@ EXPORT_SYMBOL_GPL(blk_mq_bio_list_merge);
  */
 static bool blk_mq_attempt_merge(struct request_queue *q,
 				 struct blk_mq_hw_ctx *hctx,
-				 struct blk_mq_ctx *ctx, struct bio *bio)
+				 struct blk_mq_ctx *ctx, struct bio *bio,
+				 unsigned int nr_segs)
 {
 	enum hctx_type type = hctx->type;
 
 	lockdep_assert_held(&ctx->lock);
 
-	if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio)) {
+	if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
 		ctx->rq_merged++;
 		return true;
 	}
@@ -318,7 +321,8 @@ static bool blk_mq_attempt_merge(struct request_queue *q,
 	return false;
 }
 
-bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
+bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
+		unsigned int nr_segs)
 {
 	struct elevator_queue *e = q->elevator;
 	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
@@ -328,7 +332,7 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
 
 	if (e && e->type->ops.bio_merge) {
 		blk_mq_put_ctx(ctx);
-		return e->type->ops.bio_merge(hctx, bio);
+		return e->type->ops.bio_merge(hctx, bio, nr_segs);
 	}
 
 	type = hctx->type;
@@ -336,7 +340,7 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
 			!list_empty_careful(&ctx->rq_lists[type])) {
 		/* default per sw-queue merge */
 		spin_lock(&ctx->lock);
-		ret = blk_mq_attempt_merge(q, hctx, ctx, bio);
+		ret = blk_mq_attempt_merge(q, hctx, ctx, bio, nr_segs);
 		spin_unlock(&ctx->lock);
 	}
 
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index c7bdb52367ac..a1e5850ffb1d 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -12,8 +12,9 @@ void blk_mq_sched_assign_ioc(struct request *rq);
 
 void blk_mq_sched_request_inserted(struct request *rq);
 bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
-				struct request **merged_request);
-bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio);
+		unsigned int nr_segs, struct request **merged_request);
+bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
+		unsigned int nr_segs);
 bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq);
 void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx);
 void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
@@ -30,12 +31,13 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e);
 void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e);
 
 static inline bool
-blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
+blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
+		unsigned int nr_segs)
 {
 	if (blk_queue_nomerges(q) || !bio_mergeable(bio))
 		return false;
 
-	return __blk_mq_sched_bio_merge(q, bio);
+	return __blk_mq_sched_bio_merge(q, bio, nr_segs);
 }
 
 static inline bool
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 61457bffa55f..d89383847d09 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1764,14 +1764,15 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	}
 }
 
-static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
+static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
+		unsigned int nr_segs)
 {
 	if (bio->bi_opf & REQ_RAHEAD)
 		rq->cmd_flags |= REQ_FAILFAST_MASK;
 
 	rq->__sector = bio->bi_iter.bi_sector;
 	rq->write_hint = bio->bi_write_hint;
-	blk_rq_bio_prep(rq->q, rq, bio);
+	blk_rq_bio_prep(rq, bio, nr_segs);
 
 	blk_account_io_start(rq, true);
 }
@@ -1941,20 +1942,20 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	struct request *rq;
 	struct blk_plug *plug;
 	struct request *same_queue_rq = NULL;
+	unsigned int nr_segs;
 	blk_qc_t cookie;
 
 	blk_queue_bounce(q, &bio);
-
-	blk_queue_split(q, &bio);
+	__blk_queue_split(q, &bio, &nr_segs);
 
 	if (!bio_integrity_prep(bio))
 		return BLK_QC_T_NONE;
 
 	if (!is_flush_fua && !blk_queue_nomerges(q) &&
-	    blk_attempt_plug_merge(q, bio, &same_queue_rq))
+	    blk_attempt_plug_merge(q, bio, nr_segs, &same_queue_rq))
 		return BLK_QC_T_NONE;
 
-	if (blk_mq_sched_bio_merge(q, bio))
+	if (blk_mq_sched_bio_merge(q, bio, nr_segs))
 		return BLK_QC_T_NONE;
 
 	rq_qos_throttle(q, bio);
@@ -1977,7 +1978,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	plug = current->plug;
 	if (unlikely(is_flush_fua)) {
 		blk_mq_put_ctx(data.ctx);
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);
 
 		/* bypass scheduler for flush rq */
 		blk_insert_flush(rq);
@@ -1991,7 +1992,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		struct request *last = NULL;
 
 		blk_mq_put_ctx(data.ctx);
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);
 
 		if (!request_count)
 			trace_block_plug(q);
@@ -2006,7 +2007,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 
 		blk_add_rq_to_plug(plug, rq);
 	} else if (plug && !blk_queue_nomerges(q)) {
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);
 
 		/*
 		 * We do limited plugging. If the bio can be merged, do that.
@@ -2035,11 +2036,11 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
 			!data.hctx->dispatch_busy)) {
 		blk_mq_put_ctx(data.ctx);
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);
 		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
 	} else {
 		blk_mq_put_ctx(data.ctx);
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);
 		blk_mq_sched_insert_request(rq, false, true, true);
 	}
 
diff --git a/block/blk.h b/block/blk.h
index 91b3581b7c7a..1390e8dbcdae 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -50,8 +50,7 @@ struct blk_flush_queue *blk_alloc_flush_queue(struct request_queue *q,
 		int node, int cmd_size, gfp_t flags);
 void blk_free_flush_queue(struct blk_flush_queue *q);
 
-void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
-			struct bio *bio);
+void blk_rq_bio_prep(struct request *rq, struct bio *bio, unsigned int nr_segs);
 void blk_freeze_queue(struct request_queue *q);
 
 static inline void blk_queue_enter_live(struct request_queue *q)
@@ -153,14 +152,14 @@ static inline bool bio_integrity_endio(struct bio *bio)
 unsigned long blk_rq_timeout(unsigned long timeout);
 void blk_add_timer(struct request *req);
 
-bool bio_attempt_front_merge(struct request_queue *q, struct request *req,
-			     struct bio *bio);
-bool bio_attempt_back_merge(struct request_queue *q, struct request *req,
-			    struct bio *bio);
+bool bio_attempt_front_merge(struct request *req, struct bio *bio,
+		unsigned int nr_segs);
+bool bio_attempt_back_merge(struct request *req, struct bio *bio,
+		unsigned int nr_segs);
 bool bio_attempt_discard_merge(struct request_queue *q, struct request *req,
 		struct bio *bio);
 bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
-			    struct request **same_queue_rq);
+		unsigned int nr_segs, struct request **same_queue_rq);
 
 void blk_account_io_start(struct request *req, bool new_io);
 void blk_account_io_completion(struct request *req, unsigned int bytes);
@@ -194,10 +193,12 @@ static inline int blk_should_fake_timeout(struct request_queue *q)
 }
 #endif
 
-int ll_back_merge_fn(struct request_queue *q, struct request *req,
-		     struct bio *bio);
-int ll_front_merge_fn(struct request_queue *q, struct request *req, 
-		      struct bio *bio);
+void __blk_queue_split(struct request_queue *q, struct bio **bio,
+		unsigned int *nr_segs);
+int ll_back_merge_fn(struct request *req, struct bio *bio,
+		unsigned int nr_segs);
+int ll_front_merge_fn(struct request *req,  struct bio *bio,
+		unsigned int nr_segs);
 struct request *attempt_back_merge(struct request_queue *q, struct request *rq);
 struct request *attempt_front_merge(struct request_queue *q, struct request *rq);
 int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index c3b05119cebd..3c2602601741 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -562,7 +562,8 @@ static void kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
 	}
 }
 
-static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
+static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
+		unsigned int nr_segs)
 {
 	struct kyber_hctx_data *khd = hctx->sched_data;
 	struct blk_mq_ctx *ctx = blk_mq_get_ctx(hctx->queue);
@@ -572,7 +573,7 @@ static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
 	bool merged;
 
 	spin_lock(&kcq->lock);
-	merged = blk_mq_bio_list_merge(hctx->queue, rq_list, bio);
+	merged = blk_mq_bio_list_merge(hctx->queue, rq_list, bio, nr_segs);
 	spin_unlock(&kcq->lock);
 	blk_mq_put_ctx(ctx);
 
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 1876f5712bfd..b8a682b5a1bb 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -469,7 +469,8 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
 	return ELEVATOR_NO_MERGE;
 }
 
-static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
+static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
+		unsigned int nr_segs)
 {
 	struct request_queue *q = hctx->queue;
 	struct deadline_data *dd = q->elevator->elevator_data;
@@ -477,7 +478,7 @@ static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
 	bool ret;
 
 	spin_lock(&dd->lock);
-	ret = blk_mq_sched_try_merge(q, bio, &free);
+	ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
 	spin_unlock(&dd->lock);
 
 	if (free)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index b83bce2beb66..27f56ba442f8 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5251,7 +5251,6 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio)
 		rcu_read_unlock();
 		raid_bio->bi_next = (void*)rdev;
 		bio_set_dev(align_bi, rdev->bdev);
-		bio_clear_flag(align_bi, BIO_SEG_VALID);
 
 		if (is_badblock(rdev, align_bi->bi_iter.bi_sector,
 				bio_sectors(align_bi),
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 0f23b5682640..ee11c4324751 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -408,7 +408,6 @@ static inline void bio_wouldblock_error(struct bio *bio)
 }
 
 struct request_queue;
-extern int bio_phys_segments(struct request_queue *, struct bio *);
 
 extern int submit_bio_wait(struct bio *bio);
 extern void bio_advance(struct bio *, unsigned);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 15d1aa53d96c..3fa1fa59f9b2 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -306,7 +306,7 @@ void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs
 bool blk_mq_complete_request(struct request *rq);
 void blk_mq_complete_request_sync(struct request *rq);
 bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
-			   struct bio *bio);
+			   struct bio *bio, unsigned int nr_segs);
 bool blk_mq_queue_stopped(struct request_queue *q);
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 95202f80676c..6a53799c3fe2 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -154,11 +154,6 @@ struct bio {
 	blk_status_t		bi_status;
 	u8			bi_partno;
 
-	/* Number of segments in this BIO after
-	 * physical address coalescing is performed.
-	 */
-	unsigned int		bi_phys_segments;
-
 	struct bvec_iter	bi_iter;
 
 	atomic_t		__bi_remaining;
@@ -210,7 +205,6 @@ struct bio {
  */
 enum {
 	BIO_NO_PAGE_REF,	/* don't put release vec pages */
-	BIO_SEG_VALID,		/* bi_phys_segments valid */
 	BIO_CLONED,		/* doesn't own data */
 	BIO_BOUNCED,		/* bio is a bounce bio */
 	BIO_USER_MAPPED,	/* contains user pages */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c67a9510e532..56cfd9b5482a 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -841,7 +841,6 @@ extern blk_status_t blk_insert_cloned_request(struct request_queue *q,
 				     struct request *rq);
 extern int blk_rq_append_bio(struct request *rq, struct bio **bio);
 extern void blk_queue_split(struct request_queue *, struct bio **);
-extern void blk_recount_segments(struct request_queue *, struct bio *);
 extern int scsi_verify_blk_ioctl(struct block_device *, unsigned int);
 extern int scsi_cmd_blk_ioctl(struct block_device *, fmode_t,
 			      unsigned int, void __user *);
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 6e8bc53740f0..169bb2e02516 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -34,7 +34,7 @@ struct elevator_mq_ops {
 	void (*depth_updated)(struct blk_mq_hw_ctx *);
 
 	bool (*allow_merge)(struct request_queue *, struct request *, struct bio *);
-	bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *);
+	bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *, unsigned int);
 	int (*request_merge)(struct request_queue *q, struct request **, struct bio *);
 	void (*request_merged)(struct request_queue *, struct request *, enum elv_merge);
 	void (*requests_merged)(struct request_queue *, struct request *, struct request *);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 4/6] block: simplify blk_recalc_rq_segments
  2019-06-06 10:28 remove bi_phys_segments and related cleanups Christoph Hellwig
                   ` (2 preceding siblings ...)
  2019-06-06 10:29 ` [PATCH 3/6] block: remove the bi_phys_segments field in struct bio Christoph Hellwig
@ 2019-06-06 10:29 ` Christoph Hellwig
  2019-06-07  6:03   ` Hannes Reinecke
  2019-06-06 10:29 ` [PATCH 5/6] block: untangle the end of blk_bio_segment_split Christoph Hellwig
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:29 UTC (permalink / raw)
  To: axboe; +Cc: Matias Bjorling, linux-block

Return the segment count and let the callers assign it, which makes the
code a little more obvious.  Also pass the request instead of q plus bio
chain, allowing for the use of rq_for_each_bvec.
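As a hedged illustration of that shape (a standalone toy; every name below is made up and none of it is kernel code): the recount helper becomes a pure function that returns the count, and each caller assigns the result itself.

```c
/* Hypothetical stand-ins for struct request and its bvec walk. */
struct fake_rq {
	unsigned int nr_phys_segments;
	unsigned int nr_bvecs;	/* stand-in for the bio chain length */
};

/* After the refactor: a pure function returning the recalculated
 * segment count instead of writing rq->nr_phys_segments itself. */
static unsigned int recalc_segments(const struct fake_rq *rq)
{
	unsigned int nr_phys_segs = 0;
	unsigned int i;

	/* stand-in for rq_for_each_bvec() visiting every bvec */
	for (i = 0; i < rq->nr_bvecs; i++)
		nr_phys_segs++;
	return nr_phys_segs;
}

/* Caller-side pattern from the patch:
 * rq->nr_phys_segments = blk_recalc_rq_segments(rq); */
unsigned int demo_recalc(void)
{
	struct fake_rq rq = { .nr_phys_segments = 0, .nr_bvecs = 3 };

	rq.nr_phys_segments = recalc_segments(&rq);
	return rq.nr_phys_segments;
}
```

The payoff is the same as in the patch: the helper has no side effects, so each call site makes the assignment visible.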

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c  |  4 ++--
 block/blk-merge.c | 21 ++++++---------------
 block/blk.h       |  2 +-
 3 files changed, 9 insertions(+), 18 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 48088dff4ec0..2287b8c2979c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1139,7 +1139,7 @@ static int blk_cloned_rq_check_limits(struct request_queue *q,
 	 * Recalculate it to check the request correctly on this queue's
 	 * limitation.
 	 */
-	blk_recalc_rq_segments(rq);
+	rq->nr_phys_segments = blk_recalc_rq_segments(rq);
 	if (rq->nr_phys_segments > queue_max_segments(q)) {
 		printk(KERN_ERR "%s: over max segments limit. (%hu > %hu)\n",
 			__func__, rq->nr_phys_segments, queue_max_segments(q));
@@ -1408,7 +1408,7 @@ bool blk_update_request(struct request *req, blk_status_t error,
 		}
 
 		/* recalculate the number of segments */
-		blk_recalc_rq_segments(req);
+		req->nr_phys_segments = blk_recalc_rq_segments(req);
 	}
 
 	return true;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 72b4fd89a22d..2ea21ffd5f72 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -310,17 +310,16 @@ void blk_queue_split(struct request_queue *q, struct bio **bio)
 }
 EXPORT_SYMBOL(blk_queue_split);
 
-static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
-					     struct bio *bio)
+unsigned int blk_recalc_rq_segments(struct request *rq)
 {
 	unsigned int nr_phys_segs = 0;
-	struct bvec_iter iter;
+	struct req_iterator iter;
 	struct bio_vec bv;
 
-	if (!bio)
+	if (!rq->bio)
 		return 0;
 
-	switch (bio_op(bio)) {
+	switch (bio_op(rq->bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
@@ -329,19 +328,11 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 		return 1;
 	}
 
-	for_each_bio(bio) {
-		bio_for_each_bvec(bv, bio, iter)
-			bvec_split_segs(q, &bv, &nr_phys_segs, NULL, UINT_MAX);
-	}
-
+	rq_for_each_bvec(bv, rq, iter)
+		bvec_split_segs(rq->q, &bv, &nr_phys_segs, NULL, UINT_MAX);
 	return nr_phys_segs;
 }
 
-void blk_recalc_rq_segments(struct request *rq)
-{
-	rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio);
-}
-
 static inline struct scatterlist *blk_next_sg(struct scatterlist **sg,
 		struct scatterlist *sglist)
 {
diff --git a/block/blk.h b/block/blk.h
index 1390e8dbcdae..befdab456209 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -203,7 +203,7 @@ struct request *attempt_back_merge(struct request_queue *q, struct request *rq);
 struct request *attempt_front_merge(struct request_queue *q, struct request *rq);
 int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
 				struct request *next);
-void blk_recalc_rq_segments(struct request *rq);
+unsigned int blk_recalc_rq_segments(struct request *rq);
 void blk_rq_set_mixed_merge(struct request *rq);
 bool blk_rq_merge_ok(struct request *rq, struct bio *bio);
 enum elv_merge blk_try_merge(struct request *rq, struct bio *bio);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 5/6] block: untangle the end of blk_bio_segment_split
  2019-06-06 10:28 remove bi_phys_segments and related cleanups Christoph Hellwig
                   ` (3 preceding siblings ...)
  2019-06-06 10:29 ` [PATCH 4/6] block: simplify blk_recalc_rq_segments Christoph Hellwig
@ 2019-06-06 10:29 ` Christoph Hellwig
  2019-06-07  6:04   ` Hannes Reinecke
  2019-06-06 10:29 ` [PATCH 6/6] block: mark blk_rq_bio_prep as inline Christoph Hellwig
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:29 UTC (permalink / raw)
  To: axboe; +Cc: Matias Bjorling, linux-block

Now that we don't need to assign the front/back segment sizes, we can
duplicate the segs assignment for the split vs no-split cases and
remove a whole chunk of boilerplate code.
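The control-flow cleanup can be sketched in isolation (a toy with made-up names; the real function iterates bvecs and calls bio_split()): the no-split path assigns *segs and returns NULL directly, and the split: label handles only the split case, instead of both paths funnelling through a do_split flag and a shared tail.

```c
#include <stddef.h>

/* Stand-in for the bio returned by bio_split(). */
static const char split_token[] = "split";

static const char *toy_segment_split(unsigned int nsegs_needed,
				     unsigned int max_segs,
				     unsigned int *segs)
{
	unsigned int nsegs = 0;
	unsigned int i;

	for (i = 0; i < nsegs_needed; i++) {
		if (nsegs >= max_segs)
			goto split;	/* limit hit: must split */
		nsegs++;
	}

	*segs = nsegs;
	return NULL;		/* no split needed */
split:
	*segs = nsegs;
	return split_token;	/* stand-in for bio_split(bio, ...) */
}

/* Small wrappers so the behaviour is easy to observe. */
int toy_did_split(unsigned int needed, unsigned int max)
{
	unsigned int s = 0;

	return toy_segment_split(needed, max, &s) != NULL;
}

unsigned int toy_segs(unsigned int needed, unsigned int max)
{
	unsigned int s = 0;

	toy_segment_split(needed, max, &s);
	return s;
}
```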

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-merge.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2ea21ffd5f72..ca45eb51c669 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -202,8 +202,6 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio_vec bv, bvprv, *bvprvp = NULL;
 	struct bvec_iter iter;
 	unsigned nsegs = 0, sectors = 0;
-	bool do_split = true;
-	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 	const unsigned max_segs = queue_max_segments(q);
 
@@ -245,17 +243,11 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		}
 	}
 
-	do_split = false;
+	*segs = nsegs;
+	return NULL;
 split:
 	*segs = nsegs;
-
-	if (do_split) {
-		new = bio_split(bio, sectors, GFP_NOIO, bs);
-		if (new)
-			bio = new;
-	}
-
-	return do_split ? new : NULL;
+	return bio_split(bio, sectors, GFP_NOIO, bs);
 }
 
 void __blk_queue_split(struct request_queue *q, struct bio **bio,
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 6/6] block: mark blk_rq_bio_prep as inline
  2019-06-06 10:28 remove bi_phys_segments and related cleanups Christoph Hellwig
                   ` (4 preceding siblings ...)
  2019-06-06 10:29 ` [PATCH 5/6] block: untangle the end of blk_bio_segment_split Christoph Hellwig
@ 2019-06-06 10:29 ` Christoph Hellwig
  2019-06-07  6:04   ` Hannes Reinecke
  2019-06-20  9:44 ` remove bi_phys_segments and related cleanups Christoph Hellwig
  2019-06-20 16:33 ` Jens Axboe
  7 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:29 UTC (permalink / raw)
  To: axboe; +Cc: Matias Bjorling, linux-block, Chaitanya Kulkarni

This function just has a few trivial assignments and has two callers,
one of which is in the fast path.
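The shape of the change, sketched with hypothetical names (the real helper moves into block/blk.h and fills in struct request): a trivial prep helper becomes static inline in a private header so the fast-path caller gets it inlined with no out-of-line call.

```c
/* Illustrative stand-ins; none of these types are the kernel's. */
struct toy_bio {
	unsigned int bi_size;
};

struct toy_rq {
	unsigned int nr_phys_segments;
	unsigned int data_len;
	struct toy_bio *bio;
};

/* static inline in a (imagined) private header: the compiler can
 * fold these trivial stores into the caller. */
static inline void toy_rq_bio_prep(struct toy_rq *rq, struct toy_bio *bio,
				   unsigned int nr_segs)
{
	rq->nr_phys_segments = nr_segs;
	rq->data_len = bio->bi_size;
	rq->bio = bio;
}

/* Exercise the helper once so the effect is observable. */
unsigned int toy_prep_demo(void)
{
	struct toy_bio b = { .bi_size = 4096 };
	struct toy_rq rq = { 0 };

	toy_rq_bio_prep(&rq, &b, 2);
	return rq.nr_phys_segments * 10000 + rq.data_len;
}
```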

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 block/blk-core.c | 11 -----------
 block/blk.h      | 13 ++++++++++++-
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 2287b8c2979c..42f8f09dacf0 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1415,17 +1415,6 @@ bool blk_update_request(struct request *req, blk_status_t error,
 }
 EXPORT_SYMBOL_GPL(blk_update_request);
 
-void blk_rq_bio_prep(struct request *rq, struct bio *bio, unsigned int nr_segs)
-{
-	rq->nr_phys_segments = nr_segs;
-	rq->__data_len = bio->bi_iter.bi_size;
-	rq->bio = rq->biotail = bio;
-	rq->ioprio = bio_prio(bio);
-
-	if (bio->bi_disk)
-		rq->rq_disk = bio->bi_disk;
-}
-
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
 /**
  * rq_flush_dcache_pages - Helper function to flush all pages in a request
diff --git a/block/blk.h b/block/blk.h
index befdab456209..2097b1905dc7 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -50,7 +50,6 @@ struct blk_flush_queue *blk_alloc_flush_queue(struct request_queue *q,
 		int node, int cmd_size, gfp_t flags);
 void blk_free_flush_queue(struct blk_flush_queue *q);
 
-void blk_rq_bio_prep(struct request *rq, struct bio *bio, unsigned int nr_segs);
 void blk_freeze_queue(struct request_queue *q);
 
 static inline void blk_queue_enter_live(struct request_queue *q)
@@ -99,6 +98,18 @@ static inline bool bvec_gap_to_prev(struct request_queue *q,
 	return __bvec_gap_to_prev(q, bprv, offset);
 }
 
+static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
+		unsigned int nr_segs)
+{
+	rq->nr_phys_segments = nr_segs;
+	rq->__data_len = bio->bi_iter.bi_size;
+	rq->bio = rq->biotail = bio;
+	rq->ioprio = bio_prio(bio);
+
+	if (bio->bi_disk)
+		rq->rq_disk = bio->bi_disk;
+}
+
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 void blk_flush_integrity(void);
 bool __bio_integrity_endio(struct bio *);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep
  2019-06-06 10:28 ` [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep Christoph Hellwig
@ 2019-06-07  5:58   ` Hannes Reinecke
  2019-06-07 12:20   ` Minwoo Im
  2019-06-20 13:40   ` Minwoo Im
  2 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2019-06-07  5:58 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: Matias Bjorling, linux-block, Chaitanya Kulkarni

On 6/6/19 12:28 PM, Christoph Hellwig wrote:
> The priority field also makes sense for passthrough requests, so
> initialize it in blk_rq_bio_prep.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
>  block/blk-core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index ee1b35fe8572..9b88b1a3eb43 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -680,7 +680,6 @@ void blk_init_request_from_bio(struct request *req, struct bio *bio)
>  		req->cmd_flags |= REQ_FAILFAST_MASK;
>  
>  	req->__sector = bio->bi_iter.bi_sector;
> -	req->ioprio = bio_prio(bio);
>  	req->write_hint = bio->bi_write_hint;
>  	blk_rq_bio_prep(req->q, req, bio);
>  }
> @@ -1436,6 +1435,7 @@ void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
>  
>  	rq->__data_len = bio->bi_iter.bi_size;
>  	rq->bio = rq->biotail = bio;
> +	rq->ioprio = bio_prio(bio);
>  
>  	if (bio->bi_disk)
>  		rq->rq_disk = bio->bi_disk;
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/6] block: remove blk_init_request_from_bio
  2019-06-06 10:29 ` [PATCH 2/6] block: remove blk_init_request_from_bio Christoph Hellwig
@ 2019-06-07  5:59   ` Hannes Reinecke
  2019-06-07 12:23   ` Minwoo Im
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2019-06-07  5:59 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: Matias Bjorling, linux-block

On 6/6/19 12:29 PM, Christoph Hellwig wrote:
> lightnvm should have never used this function, as it is sending
> passthrough requests, so switch it to blk_rq_append_bio like all the
> other passthrough request users.  Inline blk_init_request_from_bio into
> the only remaining caller.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/blk-core.c             | 11 -----------
>  block/blk-mq.c               |  7 ++++++-
>  drivers/nvme/host/lightnvm.c |  2 +-
>  include/linux/blkdev.h       |  1 -
>  4 files changed, 7 insertions(+), 14 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/6] block: remove the bi_phys_segments field in struct bio
  2019-06-06 10:29 ` [PATCH 3/6] block: remove the bi_phys_segments field in struct bio Christoph Hellwig
@ 2019-06-07  6:02   ` Hannes Reinecke
  2019-06-07 16:45     ` Christoph Hellwig
  2019-06-09 22:17     ` Nikolay Borisov
  2019-06-10 18:30   ` Bart Van Assche
  1 sibling, 2 replies; 32+ messages in thread
From: Hannes Reinecke @ 2019-06-07  6:02 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: Matias Bjorling, linux-block

On 6/6/19 12:29 PM, Christoph Hellwig wrote:
> We only need the number of segments in the blk-mq submission path.
> Remove the field from struct bio, and return it from a variant of
> blk_queue_split instead, so that it can be passed as an argument to
> those functions that need the value.
> 
> This also means we stop recounting segments except for cloning
> and partial segments.
> 
> To keep the number of arguments in this hot path down, remove the
> pointless struct request_queue argument from any of the functions
> that had it and grew a nr_segs argument.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  Documentation/block/biodoc.txt |  1 -
>  block/bfq-iosched.c            |  5 ++-
>  block/bio.c                    | 15 +------
>  block/blk-core.c               | 32 +++++++--------
>  block/blk-map.c                | 10 ++++-
>  block/blk-merge.c              | 75 ++++++++++++----------------------
>  block/blk-mq-sched.c           | 26 +++++++-----
>  block/blk-mq-sched.h           | 10 +++--
>  block/blk-mq.c                 | 23 ++++++-----
>  block/blk.h                    | 23 ++++++-----
>  block/kyber-iosched.c          |  5 ++-
>  block/mq-deadline.c            |  5 ++-
>  drivers/md/raid5.c             |  1 -
>  include/linux/bio.h            |  1 -
>  include/linux/blk-mq.h         |  2 +-
>  include/linux/blk_types.h      |  6 ---
>  include/linux/blkdev.h         |  1 -
>  include/linux/elevator.h       |  2 +-
>  18 files changed, 106 insertions(+), 137 deletions(-)
> 
In general a very good idea, but:

> @@ -304,6 +301,13 @@ void blk_queue_split(struct request_queue *q, struct bio **bio)
>  		*bio = split;
>  	}
>  }
> +
> +void blk_queue_split(struct request_queue *q, struct bio **bio)
> +{
> +	unsigned int nr_segs;
> +
> +	__blk_queue_split(q, bio, &nr_segs);
> +}
>  EXPORT_SYMBOL(blk_queue_split);
>  
That looks a bit weird, and I guess some or other compiler might
complain here about nr_segs being unused.
Can't we modify __blk_queue_split() to accept a NULL argument here?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 4/6] block: simplify blk_recalc_rq_segments
  2019-06-06 10:29 ` [PATCH 4/6] block: simplify blk_recalc_rq_segments Christoph Hellwig
@ 2019-06-07  6:03   ` Hannes Reinecke
  0 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2019-06-07  6:03 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: Matias Bjorling, linux-block

On 6/6/19 12:29 PM, Christoph Hellwig wrote:
> Return the segment count and let the callers assign it, which makes the
> code a little more obvious.  Also pass the request instead of q plus bio
> chain, allowing for the use of rq_for_each_bvec.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/blk-core.c  |  4 ++--
>  block/blk-merge.c | 21 ++++++---------------
>  block/blk.h       |  2 +-
>  3 files changed, 9 insertions(+), 18 deletions(-)
> Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 5/6] block: untangle the end of blk_bio_segment_split
  2019-06-06 10:29 ` [PATCH 5/6] block: untangle the end of blk_bio_segment_split Christoph Hellwig
@ 2019-06-07  6:04   ` Hannes Reinecke
  0 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2019-06-07  6:04 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: Matias Bjorling, linux-block

On 6/6/19 12:29 PM, Christoph Hellwig wrote:
> Now that we don't need to assign the front/back segment sizes, we can
> duplicate the segs assignment for the split vs no-split cases and
> remove a whole chunk of boilerplate code.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/blk-merge.c | 14 +++-----------
>  1 file changed, 3 insertions(+), 11 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 6/6] block: mark blk_rq_bio_prep as inline
  2019-06-06 10:29 ` [PATCH 6/6] block: mark blk_rq_bio_prep as inline Christoph Hellwig
@ 2019-06-07  6:04   ` Hannes Reinecke
  0 siblings, 0 replies; 32+ messages in thread
From: Hannes Reinecke @ 2019-06-07  6:04 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: Matias Bjorling, linux-block, Chaitanya Kulkarni

On 6/6/19 12:29 PM, Christoph Hellwig wrote:
> This function just has a few trivial assignments and has two callers,
> one of which is in the fast path.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
>  block/blk-core.c | 11 -----------
>  block/blk.h      | 13 ++++++++++++-
>  2 files changed, 12 insertions(+), 12 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep
  2019-06-06 10:28 ` [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep Christoph Hellwig
  2019-06-07  5:58   ` Hannes Reinecke
@ 2019-06-07 12:20   ` Minwoo Im
  2019-06-20 13:40   ` Minwoo Im
  2 siblings, 0 replies; 32+ messages in thread
From: Minwoo Im @ 2019-06-07 12:20 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: axboe, Matias Bjorling, linux-block, Chaitanya Kulkarni

It looks good to me:

Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/6] block: remove blk_init_request_from_bio
  2019-06-06 10:29 ` [PATCH 2/6] block: remove blk_init_request_from_bio Christoph Hellwig
  2019-06-07  5:59   ` Hannes Reinecke
@ 2019-06-07 12:23   ` Minwoo Im
  2019-06-07 12:57   ` Javier González
  2019-06-09 19:51   ` Matias Bjørling
  3 siblings, 0 replies; 32+ messages in thread
From: Minwoo Im @ 2019-06-07 12:23 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: axboe, Matias Bjorling, linux-block

It looks good to me.

Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/6] block: remove blk_init_request_from_bio
  2019-06-06 10:29 ` [PATCH 2/6] block: remove blk_init_request_from_bio Christoph Hellwig
  2019-06-07  5:59   ` Hannes Reinecke
  2019-06-07 12:23   ` Minwoo Im
@ 2019-06-07 12:57   ` Javier González
  2019-06-09 19:51   ` Matias Bjørling
  3 siblings, 0 replies; 32+ messages in thread
From: Javier González @ 2019-06-07 12:57 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Matias Bjørling, linux-block

[-- Attachment #1: Type: text/plain, Size: 3239 bytes --]

> On 6 Jun 2019, at 12.29, Christoph Hellwig <hch@lst.de> wrote:
> 
> lightnvm should have never used this function, as it is sending
> passthrough requests, so switch it to blk_rq_append_bio like all the
> other passthrough request users.  Inline blk_init_request_from_bio into
> the only remaining caller.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> block/blk-core.c             | 11 -----------
> block/blk-mq.c               |  7 ++++++-
> drivers/nvme/host/lightnvm.c |  2 +-
> include/linux/blkdev.h       |  1 -
> 4 files changed, 7 insertions(+), 14 deletions(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 9b88b1a3eb43..a9cb465c7ef7 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -674,17 +674,6 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
> 	return false;
> }
> 
> -void blk_init_request_from_bio(struct request *req, struct bio *bio)
> -{
> -	if (bio->bi_opf & REQ_RAHEAD)
> -		req->cmd_flags |= REQ_FAILFAST_MASK;
> -
> -	req->__sector = bio->bi_iter.bi_sector;
> -	req->write_hint = bio->bi_write_hint;
> -	blk_rq_bio_prep(req->q, req, bio);
> -}
> -EXPORT_SYMBOL_GPL(blk_init_request_from_bio);
> -
> static void handle_bad_sector(struct bio *bio, sector_t maxsector)
> {
> 	char b[BDEVNAME_SIZE];
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index ce0f5f4ede70..61457bffa55f 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1766,7 +1766,12 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
> 
> static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
> {
> -	blk_init_request_from_bio(rq, bio);
> +	if (bio->bi_opf & REQ_RAHEAD)
> +		rq->cmd_flags |= REQ_FAILFAST_MASK;
> +
> +	rq->__sector = bio->bi_iter.bi_sector;
> +	rq->write_hint = bio->bi_write_hint;
> +	blk_rq_bio_prep(rq->q, rq, bio);
> 
> 	blk_account_io_start(rq, true);
> }
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 4f20a10b39d3..ba009d4c9dfa 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -660,7 +660,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
> 	rq->cmd_flags &= ~REQ_FAILFAST_DRIVER;
> 
> 	if (rqd->bio)
> -		blk_init_request_from_bio(rq, rqd->bio);
> +		blk_rq_append_bio(rq, &rqd->bio);
> 	else
> 		rq->ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
> 
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 592669bcc536..c67a9510e532 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -828,7 +828,6 @@ extern void blk_unregister_queue(struct gendisk *disk);
> extern blk_qc_t generic_make_request(struct bio *bio);
> extern blk_qc_t direct_make_request(struct bio *bio);
> extern void blk_rq_init(struct request_queue *q, struct request *rq);
> -extern void blk_init_request_from_bio(struct request *req, struct bio *bio);
> extern void blk_put_request(struct request *);
> extern struct request *blk_get_request(struct request_queue *, unsigned int op,
> 				       blk_mq_req_flags_t flags);
> --
> 2.20.1


Looks good.

Reviewed-by: Javier González <javier@javigon.com>


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/6] block: remove the bi_phys_segments field in struct bio
  2019-06-07  6:02   ` Hannes Reinecke
@ 2019-06-07 16:45     ` Christoph Hellwig
  2019-06-09 22:17     ` Nikolay Borisov
  1 sibling, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2019-06-07 16:45 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Christoph Hellwig, axboe, Matias Bjorling, linux-block

On Fri, Jun 07, 2019 at 08:02:55AM +0200, Hannes Reinecke wrote:
> > +void blk_queue_split(struct request_queue *q, struct bio **bio)
> > +{
> > +	unsigned int nr_segs;
> > +
> > +	__blk_queue_split(q, bio, &nr_segs);
> > +}
> >  EXPORT_SYMBOL(blk_queue_split);
> >  
> That looks a bit weird, and I guess some or other compiler might
> complain here about nr_segs being unused.
> Can't we modify __blk_queue_split() to accept a NULL argument here?

We could.  But that would bloat the fast path for absolutely no
reason.  Passing a by reference output argument that is then ignored
is a pretty common pattern.
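The pattern being defended here can be shown in isolation (hypothetical names, not the kernel's): the full variant stores through its output pointer unconditionally, so it needs no NULL check in the fast path, while the thin exported wrapper passes a local it never reads.

```c
/* Full variant: always fills in *nr_segs, no NULL check needed
 * on the hot path. */
static void toy_queue_split(int depth, unsigned int *nr_segs)
{
	*nr_segs = (unsigned int)depth * 2;	/* arbitrary toy computation */
}

/* Thin wrapper, mirroring blk_queue_split() calling
 * __blk_queue_split(): the by-reference output argument is a
 * throwaway local that is simply ignored. */
void toy_queue_split_simple(int depth)
{
	unsigned int nr_segs;

	toy_queue_split(depth, &nr_segs);
	(void)nr_segs;	/* intentionally unused by this caller */
}

/* Helper so the stored value can be checked. */
unsigned int toy_queue_split_value(int depth)
{
	unsigned int nr_segs = 0;

	toy_queue_split(depth, &nr_segs);
	return nr_segs;
}
```

The design trade-off is exactly the one stated in the reply: one dead store in a cold wrapper versus an extra branch in the hot path.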

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/6] block: remove blk_init_request_from_bio
  2019-06-06 10:29 ` [PATCH 2/6] block: remove blk_init_request_from_bio Christoph Hellwig
                     ` (2 preceding siblings ...)
  2019-06-07 12:57   ` Javier González
@ 2019-06-09 19:51   ` Matias Bjørling
  3 siblings, 0 replies; 32+ messages in thread
From: Matias Bjørling @ 2019-06-09 19:51 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: linux-block

On 6/6/19 12:29 PM, Christoph Hellwig wrote:
> lightnvm should have never used this function, as it is sending
> passthrough requests, so switch it to blk_rq_append_bio like all the
> other passthrough request users.  Inline blk_init_request_from_bio into
> the only remaining caller.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-core.c             | 11 -----------
>   block/blk-mq.c               |  7 ++++++-
>   drivers/nvme/host/lightnvm.c |  2 +-
>   include/linux/blkdev.h       |  1 -
>   4 files changed, 7 insertions(+), 14 deletions(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 9b88b1a3eb43..a9cb465c7ef7 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -674,17 +674,6 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
>   	return false;
>   }
>   
> -void blk_init_request_from_bio(struct request *req, struct bio *bio)
> -{
> -	if (bio->bi_opf & REQ_RAHEAD)
> -		req->cmd_flags |= REQ_FAILFAST_MASK;
> -
> -	req->__sector = bio->bi_iter.bi_sector;
> -	req->write_hint = bio->bi_write_hint;
> -	blk_rq_bio_prep(req->q, req, bio);
> -}
> -EXPORT_SYMBOL_GPL(blk_init_request_from_bio);
> -
>   static void handle_bad_sector(struct bio *bio, sector_t maxsector)
>   {
>   	char b[BDEVNAME_SIZE];
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index ce0f5f4ede70..61457bffa55f 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1766,7 +1766,12 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
>   
>   static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
>   {
> -	blk_init_request_from_bio(rq, bio);
> +	if (bio->bi_opf & REQ_RAHEAD)
> +		rq->cmd_flags |= REQ_FAILFAST_MASK;
> +
> +	rq->__sector = bio->bi_iter.bi_sector;
> +	rq->write_hint = bio->bi_write_hint;
> +	blk_rq_bio_prep(rq->q, rq, bio);
>   
>   	blk_account_io_start(rq, true);
>   }
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 4f20a10b39d3..ba009d4c9dfa 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -660,7 +660,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
>   	rq->cmd_flags &= ~REQ_FAILFAST_DRIVER;
>   
>   	if (rqd->bio)
> -		blk_init_request_from_bio(rq, rqd->bio);
> +		blk_rq_append_bio(rq, &rqd->bio);
>   	else
>   		rq->ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
>   
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 592669bcc536..c67a9510e532 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -828,7 +828,6 @@ extern void blk_unregister_queue(struct gendisk *disk);
>   extern blk_qc_t generic_make_request(struct bio *bio);
>   extern blk_qc_t direct_make_request(struct bio *bio);
>   extern void blk_rq_init(struct request_queue *q, struct request *rq);
> -extern void blk_init_request_from_bio(struct request *req, struct bio *bio);
>   extern void blk_put_request(struct request *);
>   extern struct request *blk_get_request(struct request_queue *, unsigned int op,
>   				       blk_mq_req_flags_t flags);
> 

Thanks Christoph.

Reviewed-by: Matias Bjørling <mb@lightnvm.io>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/6] block: remove the bi_phys_segments field in struct bio
  2019-06-07  6:02   ` Hannes Reinecke
  2019-06-07 16:45     ` Christoph Hellwig
@ 2019-06-09 22:17     ` Nikolay Borisov
  1 sibling, 0 replies; 32+ messages in thread
From: Nikolay Borisov @ 2019-06-09 22:17 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig, axboe; +Cc: Matias Bjorling, linux-block



On 7.06.19 г. 9:02 ч., Hannes Reinecke wrote:
> On 6/6/19 12:29 PM, Christoph Hellwig wrote:
>> We only need the number of segments in the blk-mq submission path.
>> Remove the field from struct bio, and return it from a variant of
>> blk_queue_split instead, so that it can be passed as an argument to
>> those functions that need the value.
>>
>> This also means we stop recounting segments except for cloning
>> and partial segments.
>>
>> To keep the number of arguments in this how path down remove
>> pointless struct request_queue arguments from any of the functions
>> that had it and grew a nr_segs argument.
>>
>> Signed-off-by: Christoph Hellwig <hch@lst.de>
>> ---
>>  Documentation/block/biodoc.txt |  1 -
>>  block/bfq-iosched.c            |  5 ++-
>>  block/bio.c                    | 15 +------
>>  block/blk-core.c               | 32 +++++++--------
>>  block/blk-map.c                | 10 ++++-
>>  block/blk-merge.c              | 75 ++++++++++++----------------------
>>  block/blk-mq-sched.c           | 26 +++++++-----
>>  block/blk-mq-sched.h           | 10 +++--
>>  block/blk-mq.c                 | 23 ++++++-----
>>  block/blk.h                    | 23 ++++++-----
>>  block/kyber-iosched.c          |  5 ++-
>>  block/mq-deadline.c            |  5 ++-
>>  drivers/md/raid5.c             |  1 -
>>  include/linux/bio.h            |  1 -
>>  include/linux/blk-mq.h         |  2 +-
>>  include/linux/blk_types.h      |  6 ---
>>  include/linux/blkdev.h         |  1 -
>>  include/linux/elevator.h       |  2 +-
>>  18 files changed, 106 insertions(+), 137 deletions(-)
>>
> In general a very good idea, but:
> 
>> @@ -304,6 +301,13 @@ void blk_queue_split(struct request_queue *q, struct bio **bio)
>>  		*bio = split;
>>  	}
>>  }
>> +
>> +void blk_queue_split(struct request_queue *q, struct bio **bio)
>> +{
>> +	unsigned int nr_segs;
>> +
>> +	__blk_queue_split(q, bio, &nr_segs);
>> +}
>>  EXPORT_SYMBOL(blk_queue_split);
>>  
> That looks a bit weird, and I guess some or other compiler might
> complain here about nr_segs being unused.
> Can't we modify __blk_queue_split() to accept a NULL argument here?

How about __maybe_unused so the compiler doesn't complain?
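For illustration, the NULL-tolerant variant Hannes asks about might look like the sketch below. The stub types and the fake segment accounting are stand-ins invented for this example, not the real block-layer definitions; only the optional out-parameter calling convention is the point.

```c
#include <assert.h>
#include <stddef.h>

/* Illustration-only stubs; the real struct request_queue and struct bio
 * are far larger.  Only the calling convention is being sketched. */
struct request_queue { int dummy; };
struct bio { unsigned int fake_seg_count; };

/* Core splitter: reports the segment count through an optional
 * out-parameter, so callers that do not care can pass NULL instead of
 * declaring a dummy local (which is what the posted patch does). */
static void __blk_queue_split(struct request_queue *q, struct bio **bio,
                              unsigned int *nr_segs)
{
	unsigned int segs = (*bio)->fake_seg_count; /* pretend we recounted */

	(void)q;
	if (nr_segs)
		*nr_segs = segs;
}

/* Exported wrapper keeps the old two-argument prototype. */
static void blk_queue_split(struct request_queue *q, struct bio **bio)
{
	__blk_queue_split(q, bio, NULL);
}
```

Whether NULL tolerance or a `__maybe_unused` dummy local is preferable is a style call; the series as posted keeps the dummy local in the wrapper.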

> 
> Cheers,
> 
> Hannes
> 


* Re: [PATCH 3/6] block: remove the bi_phys_segments field in struct bio
  2019-06-06 10:29 ` [PATCH 3/6] block: remove the bi_phys_segments field in struct bio Christoph Hellwig
  2019-06-07  6:02   ` Hannes Reinecke
@ 2019-06-10 18:30   ` Bart Van Assche
  1 sibling, 0 replies; 32+ messages in thread
From: Bart Van Assche @ 2019-06-10 18:30 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: Matias Bjorling, linux-block

On 6/6/19 3:29 AM, Christoph Hellwig wrote:
> We only need the number of segments in the blk-mq submission path.
> Remove the field from struct bio, and return it from a variant of
> blk_queue_split instead of that it can passed as an argument to
> those functions that need the value.
> 
> This also means we stop recounting segments except for cloning
> and partial segments.
> 
> To keep the number of arguments in this how path down remove
                                           ^^^
                                           hot?

I will have a further look at this patch series as soon as I have the time.

Bart.


* Re: remove bi_phys_segments and related cleanups
  2019-06-06 10:28 remove bi_phys_segments and related cleanups Christoph Hellwig
                   ` (5 preceding siblings ...)
  2019-06-06 10:29 ` [PATCH 6/6] block: mark blk_rq_bio_prep as inline Christoph Hellwig
@ 2019-06-20  9:44 ` Christoph Hellwig
  2019-06-20 16:33 ` Jens Axboe
  7 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2019-06-20  9:44 UTC (permalink / raw)
  To: axboe; +Cc: Matias Bjorling, linux-block

ping?

On Thu, Jun 06, 2019 at 12:28:58PM +0200, Christoph Hellwig wrote:
> Hi Jens,
> 
> this series removes the bi_phys_segments member in struct bio
> and cleans up various areas around it.
---end quoted text---


* Re: [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep
  2019-06-06 10:28 ` [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep Christoph Hellwig
  2019-06-07  5:58   ` Hannes Reinecke
  2019-06-07 12:20   ` Minwoo Im
@ 2019-06-20 13:40   ` Minwoo Im
  2 siblings, 0 replies; 32+ messages in thread
From: Minwoo Im @ 2019-06-20 13:40 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: axboe, Matias Bjorling, linux-block, Chaitanya Kulkarni

Looks good to me.

Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>


* Re: remove bi_phys_segments and related cleanups
  2019-06-06 10:28 remove bi_phys_segments and related cleanups Christoph Hellwig
                   ` (6 preceding siblings ...)
  2019-06-20  9:44 ` remove bi_phys_segments and related cleanups Christoph Hellwig
@ 2019-06-20 16:33 ` Jens Axboe
  2019-07-01 16:46   ` Jens Axboe
  7 siblings, 1 reply; 32+ messages in thread
From: Jens Axboe @ 2019-06-20 16:33 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Matias Bjorling, linux-block

On 6/6/19 4:28 AM, Christoph Hellwig wrote:
> Hi Jens,
> 
> this series removes the bi_phys_segments member in struct bio
> and cleans up various areas around it.

Applied, thanks.

-- 
Jens Axboe



* Re: remove bi_phys_segments and related cleanups
  2019-06-20 16:33 ` Jens Axboe
@ 2019-07-01 16:46   ` Jens Axboe
  2019-07-02 13:34     ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Jens Axboe @ 2019-07-01 16:46 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Matias Bjorling, linux-block

On 6/20/19 10:32 AM, Jens Axboe wrote:
> On 6/6/19 4:28 AM, Christoph Hellwig wrote:
>> Hi Jens,
>>
>> this series removes the bi_phys_segments member in struct bio
>> and cleans up various areas around it.
> 
> Applied, thanks.

Now that I'm back and can fully go through testing, I ran into
problems with this series. On one system, booting up makes it
crash. I've managed to get it to boot, only for the first sync
to make it crash. So it seems likely related to a flush.

No console on that system, but that's OK since the trace itself
isn't that interesting. We end up crashing in nvme_queue_rq(),
here:

(gdb) l *nvme_queue_rq+0x252
0xffffffff8148b292 is in nvme_queue_rq (./include/linux/blkdev.h:962).
957	 */
958	static inline struct bio_vec req_bvec(struct request *rq)
959	{
960		if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
961			return rq->special_vec;
962		return mp_bvec_iter_bvec(rq->bio->bi_io_vec, rq->bio->bi_iter);
963	}

with a NULL pointer dereference at 0x8.
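For reference, the fault address lines up with the bvec layout: on a data-less bio, `bi_io_vec` is NULL, and `req_bvec()` then reads bvec fields off a NULL base, with `bv_len` sitting at offset 0x8 on a 64-bit target. The struct below is a simplified stand-in mirroring the 5.2-era field order, not the kernel's real definition:

```c
#include <assert.h>
#include <stddef.h>

struct page;	/* opaque; only the pointer size matters here */

/* Simplified stand-in for the kernel's struct bio_vec (LP64 assumed). */
struct bio_vec {
	struct page *bv_page;	/* offset 0x0 */
	unsigned int bv_len;	/* offset 0x8: the faulting load */
	unsigned int bv_offset;	/* offset 0xc */
};

/* The offset req_bvec() would touch when bi_io_vec is NULL and
 * bi_iter.bi_idx is 0: &NULL[0].bv_len is (void *)0x8.  Computed with
 * offsetof so nothing is actually dereferenced. */
static unsigned long faulting_offset(void)
{
	return offsetof(struct bio_vec, bv_len);
}
```

which matches the "NULL pointer dereference at 0x8" in the trace.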

Taking a look at the series, this is the offending commit:

commit 14ccb66b3f585b2bc21e7256c96090abed5a512c
Author: Christoph Hellwig <hch@lst.de>
Date:   Thu Jun 6 12:29:01 2019 +0200

    block: remove the bi_phys_segments field in struct bio

with that as the HEAD, crash. Go back one commit, works fine.

-- 
Jens Axboe


* Re: remove bi_phys_segments and related cleanups
  2019-07-01 16:46   ` Jens Axboe
@ 2019-07-02 13:34     ` Christoph Hellwig
       [not found]       ` <bfe8a4b5-901e-5ac4-e11c-0e6ccc4faec2@kernel.dk>
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2019-07-02 13:34 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, Matias Bjorling, linux-block

On Mon, Jul 01, 2019 at 10:46:12AM -0600, Jens Axboe wrote:
> No console on that system, but that's OK since the trace itself
> isn't that interesting. We end up crashing in nvme_queue_rq(),
> here:

Weird.  Is that an nvme device with or without a volatile write
cache?  Can you send me your .config?


* Re: remove bi_phys_segments and related cleanups
       [not found]       ` <bfe8a4b5-901e-5ac4-e11c-0e6ccc4faec2@kernel.dk>
@ 2019-07-02 18:29         ` Christoph Hellwig
  2019-07-02 18:37           ` Jens Axboe
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2019-07-02 18:29 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, Matias Bjorling, linux-block

On Tue, Jul 02, 2019 at 07:39:52AM -0600, Jens Axboe wrote:
> .config attached.

I couldn't get that to boot in my test systems even with mainline,
but that seems to be due to systemd waiting for some crap not supported
in the config.

But with my usual test config I've just completed a test run with
KASAN enabled on a VWC=1 drive with no issues, so this keeps puzzling
me.


* Re: remove bi_phys_segments and related cleanups
  2019-07-02 18:29         ` Christoph Hellwig
@ 2019-07-02 18:37           ` Jens Axboe
  2019-07-03  0:00             ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Jens Axboe @ 2019-07-02 18:37 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Matias Bjorling, linux-block

On 7/2/19 12:29 PM, Christoph Hellwig wrote:
> On Tue, Jul 02, 2019 at 07:39:52AM -0600, Jens Axboe wrote:
>> .config attached.
> 
> I couldn't get that to boot in my test systems even with mainline,
> but that seems to be due to systemd waiting for some crap not supported
> in the config.
> 
> But with my usual test config I've just completed a test run with
> KASAN enabled on a VWC=1 driver with no issues, so this keeps puzzling
> me.

Let me know what you want me to try. I can't reproduce it in qemu, but
it's 100% reproducible on my laptop. My qemu drives also have VWC=1,
so it's not (just) that.

-- 
Jens Axboe



* Re: remove bi_phys_segments and related cleanups
  2019-07-02 18:37           ` Jens Axboe
@ 2019-07-03  0:00             ` Christoph Hellwig
  2019-07-03  1:10               ` Jens Axboe
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2019-07-03  0:00 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, Matias Bjorling, linux-block

On Tue, Jul 02, 2019 at 12:37:59PM -0600, Jens Axboe wrote:
> > I couldn't get that to boot in my test systems even with mainline,
> > but that seems to be due to systemd waiting for some crap not supported
> > in the config.
> > 
> > But with my usual test config I've just completed a test run with
> > KASAN enabled on a VWC=1 driver with no issues, so this keeps puzzling
> > me.
> 
> Let me know what you want me to try. I can't reproduce it in qemu, but
> it's 100% on my laptop. My qemu drives also have VWC=1, so it's not
> (just) that.

I seriously have no idea, unfortunately.  It works fine for me both
on qemu and on a real WD SN720 drive on my laptop.  Just out of
curiosity, you could try to pad the bio structure and see if bloating
it to the old size makes any difference.

The other thing that comes to mind is that when Johannes removed
BIO_SEG_VALID there were also some unexplainable side effects;
I'll look into seeing if there was any similarity.


* Re: remove bi_phys_segments and related cleanups
  2019-07-03  0:00             ` Christoph Hellwig
@ 2019-07-03  1:10               ` Jens Axboe
  2019-07-03  1:32                 ` Jens Axboe
  0 siblings, 1 reply; 32+ messages in thread
From: Jens Axboe @ 2019-07-03  1:10 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Matias Bjorling, linux-block

On 7/2/19 6:00 PM, Christoph Hellwig wrote:
> On Tue, Jul 02, 2019 at 12:37:59PM -0600, Jens Axboe wrote:
>>> I couldn't get that to boot in my test systems even with mainline,
>>> but that seems to be due to systemd waiting for some crap not supported
>>> in the config.
>>>
>>> But with my usual test config I've just completed a test run with
>>> KASAN enabled on a VWC=1 driver with no issues, so this keeps puzzling
>>> me.
>>
>> Let me know what you want me to try. I can't reproduce it in qemu, but
>> it's 100% on my laptop. My qemu drives also have VWC=1, so it's not
>> (just) that.
> 
> I seriously have no idea unfortunately.  It works fine for me both
> on qemu and on a real WD SN720 drive on my laptop.  Just for curiosity
> you could try to pad the bio structure and see if bloating it to the
> old size makes any difference.

No change with the padding; I put it in the same place. Still an
insta-crash before I get to the login prompt, or right after I log in
and run sync.

> The other thing that comes to mind is that when Johannes removed
> BIO_SEG_VALID there were also some unexplainable side effects;
> I'll look into seeing if there was any similarity.

Do you have a link?

I'll try and poke a bit here.

-- 
Jens Axboe



* Re: remove bi_phys_segments and related cleanups
  2019-07-03  1:10               ` Jens Axboe
@ 2019-07-03  1:32                 ` Jens Axboe
  2019-07-03  1:35                   ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Jens Axboe @ 2019-07-03  1:32 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Matias Bjorling, linux-block

On 7/2/19 7:10 PM, Jens Axboe wrote:
> On 7/2/19 6:00 PM, Christoph Hellwig wrote:
>> On Tue, Jul 02, 2019 at 12:37:59PM -0600, Jens Axboe wrote:
>>>> I couldn't get that to boot in my test systems even with mainline,
>>>> but that seems to be due to systemd waiting for some crap not supported
>>>> in the config.
>>>>
>>>> But with my usual test config I've just completed a test run with
>>>> KASAN enabled on a VWC=1 driver with no issues, so this keeps puzzling
>>>> me.
>>>
>>> Let me know what you want me to try. I can't reproduce it in qemu, but
>>> it's 100% on my laptop. My qemu drives also have VWC=1, so it's not
>>> (just) that.
>>
>> I seriously have no idea unfortunately.  It works fine for me both
>> on qemu and on a real WD SN720 drive on my laptop.  Just for curiosity
>> you could try to pad the bio structure and see if bloating it to the
>> old size makes any difference.
> 
> No change with the padding, put it in the same place. Still insta crash
> before I get to login prompt, or right after I login and run sync.
> 
>> The other thing that comes to mind is that when Johannes removed
>> BIO_SEG_VALID there were also some unexplainable side effects;
>> I'll look into seeing if there was any similarity.
> 
> Do you have a link?
> 
> I'll try and poke a bit here.

The issue is a discard request, 4kb, with a bio but no bi_io_vec.
The sync is a red herring, apart from the fact that it triggers
the ext4 discards.

I'm guessing, when you tried to reproduce, that you didn't have discard
enabled?

-- 
Jens Axboe
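Data-less requests (Discard, Write Zeroes) still carry a bio for sector and size bookkeeping but no payload, so `bi_io_vec` stays NULL. A hedged sketch of the kind of guard a bvec accessor needs in this situation follows; the stripped-down types and the helper name are inventions for illustration, not the kernel's actual fix:

```c
#include <assert.h>
#include <stddef.h>

struct bio_vec { void *bv_page; unsigned int bv_len, bv_offset; };

/* Minimal stand-in: a data-less bio has valid bookkeeping but no vecs. */
struct bio {
	struct bio_vec *bi_io_vec;	/* NULL for Discard / Write Zeroes */
	unsigned short  bi_vcnt;
};

/* Illustration-only accessor: refuse to hand out a bvec unless the bio
 * actually carries payload, instead of dereferencing unconditionally. */
static const struct bio_vec *first_bvec_or_null(const struct bio *bio)
{
	if (!bio || !bio->bi_io_vec || bio->bi_vcnt == 0)
		return NULL;
	return &bio->bi_io_vec[0];
}
```

Any caller like `req_bvec()` that skips such a check and only tests `RQF_SPECIAL_PAYLOAD` will fault exactly as described in this thread.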



* Re: remove bi_phys_segments and related cleanups
  2019-07-03  1:32                 ` Jens Axboe
@ 2019-07-03  1:35                   ` Christoph Hellwig
  2019-07-03 12:16                     ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2019-07-03  1:35 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, Matias Bjorling, linux-block

On Tue, Jul 02, 2019 at 07:32:40PM -0600, Jens Axboe wrote:
> The issue is a discard request, 4kb, with a bio but no bi_io_vec.
> The sync is a red herring, apart from the fact that it triggers
> the ext4 discards.
> 
> I'm guessing, when you tried to reproduce, that you didn't have discard
> enabled?

I didn't run with online discard, but I tried FITRIM (with XFS).

I'll try looking into ext4.  I remember we had some weird one-offs in
that area for the different ways the physical segments are calculated,
so this makes sense.  Off to dinner now, but I'll look into it tomorrow.


* Re: remove bi_phys_segments and related cleanups
  2019-07-03  1:35                   ` Christoph Hellwig
@ 2019-07-03 12:16                     ` Christoph Hellwig
  0 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2019-07-03 12:16 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, Matias Bjorling, linux-block

On Wed, Jul 03, 2019 at 03:35:36AM +0200, Christoph Hellwig wrote:
> I didn't run with online discard, but I tried FITRIM (with XFS).
> 
> I'll try looking into ext4.  I remember we had some weird one offs in that
> area for the different ways the physical segments are calculated, so this
> makes sense.  Off to dinner now, but I'll look into it tomorrow.

ext4 reproduced the bug instantly, but it was Write Zeroes, not Discard.
Which explains the whole thing, as ext4 is the only major user of Write
Zeroes.  A patch will be on its way shortly.


end of thread

Thread overview: 32+ messages
2019-06-06 10:28 remove bi_phys_segments and related cleanups Christoph Hellwig
2019-06-06 10:28 ` [PATCH 1/6] block: initialize the write priority in blk_rq_bio_prep Christoph Hellwig
2019-06-07  5:58   ` Hannes Reinecke
2019-06-07 12:20   ` Minwoo Im
2019-06-20 13:40   ` Minwoo Im
2019-06-06 10:29 ` [PATCH 2/6] block: remove blk_init_request_from_bio Christoph Hellwig
2019-06-07  5:59   ` Hannes Reinecke
2019-06-07 12:23   ` Minwoo Im
2019-06-07 12:57   ` Javier González
2019-06-09 19:51   ` Matias Bjørling
2019-06-06 10:29 ` [PATCH 3/6] block: remove the bi_phys_segments field in struct bio Christoph Hellwig
2019-06-07  6:02   ` Hannes Reinecke
2019-06-07 16:45     ` Christoph Hellwig
2019-06-09 22:17     ` Nikolay Borisov
2019-06-10 18:30   ` Bart Van Assche
2019-06-06 10:29 ` [PATCH 4/6] block: simplify blk_recalc_rq_segments Christoph Hellwig
2019-06-07  6:03   ` Hannes Reinecke
2019-06-06 10:29 ` [PATCH 5/6] block: untangle the end of blk_bio_segment_split Christoph Hellwig
2019-06-07  6:04   ` Hannes Reinecke
2019-06-06 10:29 ` [PATCH 6/6] block: mark blk_rq_bio_prep as inline Christoph Hellwig
2019-06-07  6:04   ` Hannes Reinecke
2019-06-20  9:44 ` remove bi_phys_segments and related cleanups Christoph Hellwig
2019-06-20 16:33 ` Jens Axboe
2019-07-01 16:46   ` Jens Axboe
2019-07-02 13:34     ` Christoph Hellwig
     [not found]       ` <bfe8a4b5-901e-5ac4-e11c-0e6ccc4faec2@kernel.dk>
2019-07-02 18:29         ` Christoph Hellwig
2019-07-02 18:37           ` Jens Axboe
2019-07-03  0:00             ` Christoph Hellwig
2019-07-03  1:10               ` Jens Axboe
2019-07-03  1:32                 ` Jens Axboe
2019-07-03  1:35                   ` Christoph Hellwig
2019-07-03 12:16                     ` Christoph Hellwig
