From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Theodore Ts'o <tytso@mit.edu>,
	Omar Sandoval <osandov@fb.com>, Sagi Grimberg <sagi@grimberg.me>,
	Dave Chinner <dchinner@redhat.com>,
	Kent Overstreet <kent.overstreet@gmail.com>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Alexander Viro <viro@zeniv.linux.org.uk>,
	linux-fsdevel@vger.kernel.org, linux-raid@vger.kernel.org,
	David Sterba <dsterba@suse.com>,
	linux-btrfs@vger.kernel.org,
	"Darrick J . Wong" <darrick.wong@oracle.com>,
	linux-xfs@vger.kernel.org, Gao Xiang <gaoxiang25@huawei.com>,
	Christoph Hellwig <hch@lst.de>,
	linux-ext4@vger.kernel.org, Coly Li <colyli@suse.de>,
	linux-bcache@vger.kernel.org, Boaz Harrosh <ooo@electrozaur.com>,
	Bob Peterson <rpeterso@redhat.com>,
	cluster-devel@redhat.com, Ming Lei <ming.lei@redhat.com>
Subject: [PATCH V14 07/18] block: use bio_for_each_mp_bvec() to map sg
Date: Mon, 21 Jan 2019 16:17:54 +0800	[thread overview]
Message-ID: <20190121081805.32727-8-ming.lei@redhat.com> (raw)
In-Reply-To: <20190121081805.32727-1-ming.lei@redhat.com>

It is more efficient to use bio_for_each_mp_bvec() to map the sg list; at the
same time we have to split each multi-page bvec into segments that respect the
queue's segment limits, just as blk_bio_segment_split() does.

Reviewed-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 70 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 50 insertions(+), 20 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2dfc30d8bc77..8a498f29636f 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -464,6 +464,54 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
 	return biovec_phys_mergeable(q, &end_bv, &nxt_bv);
 }
 
+static struct scatterlist *blk_next_sg(struct scatterlist **sg,
+		struct scatterlist *sglist)
+{
+	if (!*sg)
+		return sglist;
+
+	/*
+	 * If the driver previously mapped a shorter list, we could see a
+	 * termination bit prematurely unless it fully inits the sg table
+	 * on each mapping. We KNOW that there must be more entries here
+	 * or the driver would be buggy, so force clear the termination bit
+	 * to avoid doing a full sg_init_table() in drivers for each command.
+	 */
+	sg_unmark_end(*sg);
+	return sg_next(*sg);
+}
+
+static unsigned blk_bvec_map_sg(struct request_queue *q,
+		struct bio_vec *bvec, struct scatterlist *sglist,
+		struct scatterlist **sg)
+{
+	unsigned nbytes = bvec->bv_len;
+	unsigned nsegs = 0, total = 0, offset = 0;
+
+	while (nbytes > 0) {
+		unsigned seg_size;
+		struct page *pg;
+		unsigned idx;
+
+		*sg = blk_next_sg(sg, sglist);
+
+		seg_size = get_max_segment_size(q, bvec->bv_offset + total);
+		seg_size = min(nbytes, seg_size);
+
+		offset = (total + bvec->bv_offset) % PAGE_SIZE;
+		idx = (total + bvec->bv_offset) / PAGE_SIZE;
+		pg = nth_page(bvec->bv_page, idx);
+
+		sg_set_page(*sg, pg, seg_size, offset);
+
+		total += seg_size;
+		nbytes -= seg_size;
+		nsegs++;
+	}
+
+	return nsegs;
+}
+
 static inline void
 __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		     struct scatterlist *sglist, struct bio_vec *bvprv,
@@ -481,25 +529,7 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 		(*sg)->length += nbytes;
 	} else {
 new_segment:
-		if (!*sg)
-			*sg = sglist;
-		else {
-			/*
-			 * If the driver previously mapped a shorter
-			 * list, we could see a termination bit
-			 * prematurely unless it fully inits the sg
-			 * table on each mapping. We KNOW that there
-			 * must be more entries here or the driver
-			 * would be buggy, so force clear the
-			 * termination bit to avoid doing a full
-			 * sg_init_table() in drivers for each command.
-			 */
-			sg_unmark_end(*sg);
-			*sg = sg_next(*sg);
-		}
-
-		sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
-		(*nsegs)++;
+		(*nsegs) += blk_bvec_map_sg(q, bvec, sglist, sg);
 	}
 	*bvprv = *bvec;
 }
@@ -521,7 +551,7 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 	int nsegs = 0;
 
 	for_each_bio(bio)
-		bio_for_each_segment(bvec, bio, iter)
+		bio_for_each_mp_bvec(bvec, bio, iter)
 			__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
 					     &nsegs);
 
-- 
2.9.5
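
The blk_bvec_map_sg() loop added above is the heart of the change: a single
multi-page bvec may now be larger than the queue's maximum segment size, so it
is carved into several sg entries. The following user-space C sketch (not part
of the patch) mimics that arithmetic so the splitting can be seen in isolation;
the 4 KB page size, the 8 KB segment limit and the ex_* helpers are assumptions
chosen purely for illustration, standing in for PAGE_SIZE,
queue_max_segment_size() and get_max_segment_size().

#include <stdio.h>

#define EX_PAGE_SIZE	4096u		/* illustrative page size */
#define EX_MAX_SEG	8192u		/* illustrative per-queue segment limit */

/* simplified stand-in for get_max_segment_size(): a segment may not
 * cross an EX_MAX_SEG-aligned boundary */
static unsigned ex_max_segment_size(unsigned offset)
{
	return EX_MAX_SEG - (offset & (EX_MAX_SEG - 1));
}

int main(void)
{
	unsigned bv_offset = 512, bv_len = 20000;	/* one bvec spanning six pages */
	unsigned nbytes = bv_len, total = 0, nsegs = 0;

	while (nbytes > 0) {
		unsigned seg_size = ex_max_segment_size(bv_offset + total);
		unsigned offset, idx;

		if (seg_size > nbytes)
			seg_size = nbytes;

		/* same arithmetic as blk_bvec_map_sg(): locate the page
		 * index and in-page offset this segment starts at */
		offset = (total + bv_offset) % EX_PAGE_SIZE;
		idx = (total + bv_offset) / EX_PAGE_SIZE;

		printf("sg[%u]: page index %u, offset %u, len %u\n",
		       nsegs, idx, offset, seg_size);

		total += seg_size;
		nbytes -= seg_size;
		nsegs++;
	}
	printf("%u bytes mapped into %u segments\n", total, nsegs);
	return 0;
}

For the example bvec (bv_offset 512, bv_len 20000) this prints three segments
of 7680, 8192 and 4128 bytes, matching what the kernel loop would hand to
sg_set_page() for a physically contiguous multi-page bvec.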

Thread overview: 74+ messages

2019-01-21  8:17 [PATCH V14 00/18] block: support multi-page bvec Ming Lei
2019-01-21  8:17 ` [PATCH V14 01/18] btrfs: look at bi_size for repair decisions Ming Lei
2019-01-21  8:17 ` [PATCH V14 02/18] block: don't use bio->bi_vcnt to figure out segment number Ming Lei
2019-01-21  8:17 ` [PATCH V14 03/18] block: remove bvec_iter_rewind() Ming Lei
2019-01-21  8:17 ` [PATCH V14 04/18] block: introduce multi-page bvec helpers Ming Lei
2019-01-21  8:17 ` [PATCH V14 05/18] block: introduce bio_for_each_mp_bvec() and rq_for_each_mp_bvec() Ming Lei
2019-01-21  8:17 ` [PATCH V14 06/18] block: use bio_for_each_mp_bvec() to compute multi-page bvec count Ming Lei
2019-01-21  8:17 ` [PATCH V14 07/18] block: use bio_for_each_mp_bvec() to map sg Ming Lei [this message]
2019-01-21  8:17 ` [PATCH V14 08/18] block: introduce mp_bvec_last_segment() Ming Lei
2019-01-21  8:17 ` [PATCH V14 09/18] fs/buffer.c: use bvec iterator to truncate the bio Ming Lei
2019-01-21  8:17 ` [PATCH V14 10/18] btrfs: use mp_bvec_last_segment to get bio's last page Ming Lei
2019-01-21  8:17 ` [PATCH V14 11/18] block: loop: pass multi-page bvec to iov_iter Ming Lei
2019-01-21  8:17 ` [PATCH V14 12/18] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages() Ming Lei
2019-01-21  8:18 ` [PATCH V14 13/18] block: allow bio_for_each_segment_all() to iterate over multi-page bvec Ming Lei
2019-01-21  8:18 ` [PATCH V14 14/18] block: enable multipage bvecs Ming Lei
2019-01-21  8:18 ` [PATCH V14 15/18] block: always define BIO_MAX_PAGES as 256 Ming Lei
2019-01-21  8:18 ` [PATCH V14 16/18] block: document usage of bio iterator helpers Ming Lei
2019-01-21  8:18 ` [PATCH V14 17/18] block: kill QUEUE_FLAG_NO_SG_MERGE Ming Lei
2019-01-21  8:18 ` [PATCH V14 18/18] block: kill BLK_MQ_F_SG_MERGE Ming Lei
2019-01-21  8:22 ` [PATCH V14 00/18] block: support multi-page bvec Christoph Hellwig
2019-01-21  8:37   ` Ming Lei
2019-01-21  8:38     ` Christoph Hellwig
2019-01-21  8:40       ` Ming Lei
2019-01-21  9:43 ` Sagi Grimberg
2019-01-22  2:01   ` Ming Lei
