* [PATCH 0/4] block: optimize for single-page bvec workloads
@ 2019-02-27 12:40 Ming Lei
2019-02-27 12:40 ` [PATCH 1/4] block: introduce bvec_nth_page() Ming Lei
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Ming Lei @ 2019-02-27 12:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Ming Lei
Hi,
The 1st patch introduces bvec_nth_page(), so that nth_page() can
be avoided when the bvec is single-page.
The 2nd and 3rd patches add a fast path for the single-page bvec case.
The last patch introduces a light-weight helper for iterating over
pages, which may improve __bio_iov_bvec_add_pages(). This patch
is for io_uring.
Thanks,
Ming Lei (4):
block: introduce bvec_nth_page()
block: optimize __blk_segment_map_sg() for single-page bvec
block: optimize blk_bio_segment_split for single-page bvec
block: introduce mp_bvec_for_each_page() for iterating over page
block/bio.c | 7 +++----
block/blk-merge.c | 23 +++++++++++++++++------
include/linux/bvec.h | 16 +++++++++++++---
3 files changed, 33 insertions(+), 13 deletions(-)
--
2.9.5
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/4] block: introduce bvec_nth_page()
2019-02-27 12:40 [PATCH 0/4] block: optimize for single-page bvec workloads Ming Lei
@ 2019-02-27 12:40 ` Ming Lei
2019-02-27 12:40 ` [PATCH 2/4] block: optimize __blk_segment_map_sg() for single-page bvec Ming Lei
` (3 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Ming Lei @ 2019-02-27 12:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Ming Lei
Single-page bvecs are often seen in small-BS workloads, so introduce
bvec_nth_page() to avoid calling nth_page() unnecessarily, since
nth_page() does not look cheap.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
block/blk-merge.c | 2 +-
include/linux/bvec.h | 11 ++++++++---
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 066b66430523..c7e8a8273460 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -483,7 +483,7 @@ static unsigned blk_bvec_map_sg(struct request_queue *q,
offset = (total + bvec->bv_offset) % PAGE_SIZE;
idx = (total + bvec->bv_offset) / PAGE_SIZE;
- pg = nth_page(bvec->bv_page, idx);
+ pg = bvec_nth_page(bvec->bv_page, idx);
sg_set_page(*sg, pg, seg_size, offset);
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 30a57b68d017..4376f683c08a 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -51,6 +51,11 @@ struct bvec_iter_all {
unsigned done;
};
+static inline struct page *bvec_nth_page(struct page *page, int idx)
+{
+ return idx == 0 ? page : nth_page(page, idx);
+}
+
/*
* various member access, note that bio_data should of course not be used
* on highmem page vectors
@@ -87,8 +92,8 @@ struct bvec_iter_all {
PAGE_SIZE - bvec_iter_offset((bvec), (iter)))
#define bvec_iter_page(bvec, iter) \
- nth_page(mp_bvec_iter_page((bvec), (iter)), \
- mp_bvec_iter_page_idx((bvec), (iter)))
+ bvec_nth_page(mp_bvec_iter_page((bvec), (iter)), \
+ mp_bvec_iter_page_idx((bvec), (iter)))
#define bvec_iter_bvec(bvec, iter) \
((struct bio_vec) { \
@@ -171,7 +176,7 @@ static inline void mp_bvec_last_segment(const struct bio_vec *bvec,
unsigned total = bvec->bv_offset + bvec->bv_len;
unsigned last_page = (total - 1) / PAGE_SIZE;
- seg->bv_page = nth_page(bvec->bv_page, last_page);
+ seg->bv_page = bvec_nth_page(bvec->bv_page, last_page);
/* the whole segment is inside the last page */
if (bvec->bv_offset >= last_page * PAGE_SIZE) {
--
2.9.5
* [PATCH 2/4] block: optimize __blk_segment_map_sg() for single-page bvec
2019-02-27 12:40 [PATCH 0/4] block: optimize for single-page bvec workloads Ming Lei
2019-02-27 12:40 ` [PATCH 1/4] block: introduce bvec_nth_page() Ming Lei
@ 2019-02-27 12:40 ` Ming Lei
2019-02-27 12:40 ` [PATCH 3/4] block: optimize blk_bio_segment_split " Ming Lei
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Ming Lei @ 2019-02-27 12:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Ming Lei
Introduce a fast path for single-page bvec IO so that blk_bvec_map_sg()
can be avoided.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
block/blk-merge.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index c7e8a8273460..c1ad8abbd9d6 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -447,7 +447,7 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
return biovec_phys_mergeable(q, &end_bv, &nxt_bv);
}
-static struct scatterlist *blk_next_sg(struct scatterlist **sg,
+static inline struct scatterlist *blk_next_sg(struct scatterlist **sg,
struct scatterlist *sglist)
{
if (!*sg)
@@ -512,7 +512,12 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
(*sg)->length += nbytes;
} else {
new_segment:
- (*nsegs) += blk_bvec_map_sg(q, bvec, sglist, sg);
+ if (bvec->bv_offset + bvec->bv_len <= PAGE_SIZE) {
+ *sg = blk_next_sg(sg, sglist);
+ sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
+ (*nsegs) += 1;
+ } else
+ (*nsegs) += blk_bvec_map_sg(q, bvec, sglist, sg);
}
*bvprv = *bvec;
}
--
2.9.5
* [PATCH 3/4] block: optimize blk_bio_segment_split for single-page bvec
2019-02-27 12:40 [PATCH 0/4] block: optimize for single-page bvec workloads Ming Lei
2019-02-27 12:40 ` [PATCH 1/4] block: introduce bvec_nth_page() Ming Lei
2019-02-27 12:40 ` [PATCH 2/4] block: optimize __blk_segment_map_sg() for single-page bvec Ming Lei
@ 2019-02-27 12:40 ` Ming Lei
2019-02-27 12:40 ` [PATCH 4/4] block: introduce mp_bvec_for_each_page() for iterating over page Ming Lei
2019-02-27 13:25 ` [PATCH 0/4] block: optimize for single-page bvec workloads Jens Axboe
4 siblings, 0 replies; 10+ messages in thread
From: Ming Lei @ 2019-02-27 12:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Ming Lei
Introduce a fast path for single-page bvec IO so that we can avoid
calling bvec_split_segs() unnecessarily.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
block/blk-merge.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index c1ad8abbd9d6..9402a7c3ba22 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -286,10 +286,16 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
bvprv = bv;
bvprvp = &bvprv;
- if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
- &front_seg_size, &sectors))
+ if (bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
+ nsegs++;
+ seg_size = bv.bv_len;
+ sectors += bv.bv_len >> 9;
+ if (nsegs == 1 && seg_size > front_seg_size)
+ front_seg_size = seg_size;
+ } else if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+ &front_seg_size, &sectors)) {
goto split;
-
+ }
}
do_split = false;
--
2.9.5
* [PATCH 4/4] block: introduce mp_bvec_for_each_page() for iterating over page
2019-02-27 12:40 [PATCH 0/4] block: optimize for single-page bvec workloads Ming Lei
` (2 preceding siblings ...)
2019-02-27 12:40 ` [PATCH 3/4] block: optimize blk_bio_segment_split " Ming Lei
@ 2019-02-27 12:40 ` Ming Lei
2019-02-28 14:13 ` Christoph Hellwig
2019-02-27 13:25 ` [PATCH 0/4] block: optimize for single-page bvec workloads Jens Axboe
4 siblings, 1 reply; 10+ messages in thread
From: Ming Lei @ 2019-02-27 12:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Ming Lei
mp_bvec_for_each_segment() is a bit heavyweight for this iteration, so
introduce a light-weight helper for iterating over pages, which saves
32 bytes of stack space.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
block/bio.c | 7 +++----
include/linux/bvec.h | 5 +++++
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index eae8b754801d..7917535123df 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -849,8 +849,7 @@ static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
size = bio_add_page(bio, bv->bv_page, len,
bv->bv_offset + iter->iov_offset);
if (size == len) {
- struct bvec_iter_all iter_all;
- struct bio_vec *tmp;
+ struct page *pg;
int i;
/*
@@ -862,8 +861,8 @@ static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
* get rid of the get here and the need to call
* bio_release_pages() at IO completion time.
*/
- mp_bvec_for_each_segment(tmp, bv, i, iter_all)
- get_page(tmp->bv_page);
+ mp_bvec_for_each_page(pg, bv, i)
+ get_page(pg);
iov_iter_advance(iter, size);
return 0;
}
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 4376f683c08a..2c32e3e151a0 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -188,4 +188,9 @@ static inline void mp_bvec_last_segment(const struct bio_vec *bvec,
}
}
+#define mp_bvec_for_each_page(pg, bv, i) \
+ for (i = (bv)->bv_offset / PAGE_SIZE; \
+ (i < (((bv)->bv_offset + (bv)->bv_len) / PAGE_SIZE)) && \
+ (pg = bvec_nth_page((bv)->bv_page, i)); i += 1)
+
#endif /* __LINUX_BVEC_ITER_H */
--
2.9.5
* Re: [PATCH 0/4] block: optimize for single-page bvec workloads
2019-02-27 12:40 [PATCH 0/4] block: optimize for single-page bvec workloads Ming Lei
` (3 preceding siblings ...)
2019-02-27 12:40 ` [PATCH 4/4] block: introduce mp_bvec_for_each_page() for iterating over page Ming Lei
@ 2019-02-27 13:25 ` Jens Axboe
2019-02-27 15:41 ` Ming Lei
4 siblings, 1 reply; 10+ messages in thread
From: Jens Axboe @ 2019-02-27 13:25 UTC (permalink / raw)
To: Ming Lei; +Cc: linux-block
On 2/27/19 5:40 AM, Ming Lei wrote:
> Hi,
>
> The 1st patch introduces bvec_nth_page(), so that nth_page() can
> be avoided when the bvec is single-page.
>
> The 2nd and 3rd patches add a fast path for the single-page bvec case.
>
> The last patch introduces a light-weight helper for iterating over
> pages, which may improve __bio_iov_bvec_add_pages(). This patch
> is for io_uring.
This reclaims another 2%; we're now at 1585K for the test case.
A definite improvement!
--
Jens Axboe
* Re: [PATCH 0/4] block: optimize for single-page bvec workloads
2019-02-27 13:25 ` [PATCH 0/4] block: optimize for single-page bvec workloads Jens Axboe
@ 2019-02-27 15:41 ` Ming Lei
2019-02-27 15:54 ` Jens Axboe
0 siblings, 1 reply; 10+ messages in thread
From: Ming Lei @ 2019-02-27 15:41 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
On Wed, Feb 27, 2019 at 06:25:30AM -0700, Jens Axboe wrote:
> On 2/27/19 5:40 AM, Ming Lei wrote:
> > Hi,
> >
> > The 1st patch introduces bvec_nth_page(), so that nth_page() can
> > be avoided when the bvec is single-page.
> >
> > The 2nd and 3rd patches add a fast path for the single-page bvec case.
> >
> > The last patch introduces a light-weight helper for iterating over
> > pages, which may improve __bio_iov_bvec_add_pages(). This patch
> > is for io_uring.
>
> This reclaims another 2%, we're now at 1585K for the test case.
> Definite improvement!
BTW, could you test the following patch against the 4 patches?
--
From e763e623a54a73858c1949b3ea957f9d97006150 Mon Sep 17 00:00:00 2001
From: Ming Lei <ming.lei@redhat.com>
Date: Wed, 27 Feb 2019 16:51:25 +0800
Subject: [PATCH] block: apply bio_for_each_page_all()
If users just need to retrieve each page, use bio_for_each_page_all(),
which is much more efficient than bio_for_each_segment_all().
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
block/bio.c | 29 +++++++++++++----------------
fs/block_dev.c | 24 ++++++++++++------------
fs/direct-io.c | 9 +++------
include/linux/bio.h | 5 +++++
4 files changed, 33 insertions(+), 34 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 7917535123df..c416b99abef8 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1643,24 +1643,22 @@ struct bio *bio_copy_kern(struct request_queue *q, void *data, unsigned int len,
*/
void bio_set_pages_dirty(struct bio *bio)
{
- struct bio_vec *bvec;
- int i;
- struct bvec_iter_all iter_all;
+ struct page *pg;
+ unsigned i, j;
- bio_for_each_segment_all(bvec, bio, i, iter_all) {
- if (!PageCompound(bvec->bv_page))
- set_page_dirty_lock(bvec->bv_page);
+ bio_for_each_page_all(pg, bio, i, j) {
+ if (!PageCompound(pg))
+ set_page_dirty_lock(pg);
}
}
static void bio_release_pages(struct bio *bio)
{
- struct bio_vec *bvec;
- int i;
- struct bvec_iter_all iter_all;
+ struct page *pg;
+ unsigned i, j;
- bio_for_each_segment_all(bvec, bio, i, iter_all)
- put_page(bvec->bv_page);
+ bio_for_each_page_all(pg, bio, i, j)
+ put_page(pg);
}
/*
@@ -1703,13 +1701,12 @@ static void bio_dirty_fn(struct work_struct *work)
void bio_check_pages_dirty(struct bio *bio)
{
- struct bio_vec *bvec;
unsigned long flags;
- int i;
- struct bvec_iter_all iter_all;
+ unsigned i, j;
+ struct page *pg;
- bio_for_each_segment_all(bvec, bio, i, iter_all) {
- if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
+ bio_for_each_page_all(pg, bio, i, j) {
+ if (!PageDirty(pg) && !PageCompound(pg))
goto defer;
}
diff --git a/fs/block_dev.c b/fs/block_dev.c
index e9faa52bb489..c6f90198c305 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -204,14 +204,15 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
{
struct file *file = iocb->ki_filp;
struct block_device *bdev = I_BDEV(bdev_file_inode(file));
- struct bio_vec inline_vecs[DIO_INLINE_BIO_VECS], *vecs, *bvec;
+ struct bio_vec inline_vecs[DIO_INLINE_BIO_VECS], *vecs;
loff_t pos = iocb->ki_pos;
bool should_dirty = false;
struct bio bio;
ssize_t ret;
blk_qc_t qc;
- int i;
- struct bvec_iter_all iter_all;
+ struct page *pg;
+ unsigned i, j;
+
if ((pos | iov_iter_alignment(iter)) &
(bdev_logical_block_size(bdev) - 1))
@@ -261,10 +262,10 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
}
__set_current_state(TASK_RUNNING);
- bio_for_each_segment_all(bvec, &bio, i, iter_all) {
- if (should_dirty && !PageCompound(bvec->bv_page))
- set_page_dirty_lock(bvec->bv_page);
- put_page(bvec->bv_page);
+ bio_for_each_page_all(pg, &bio, i, j) {
+ if (should_dirty && !PageCompound(pg))
+ set_page_dirty_lock(pg);
+ put_page(pg);
}
if (unlikely(bio.bi_status))
@@ -336,12 +337,11 @@ static void blkdev_bio_end_io(struct bio *bio)
if (should_dirty) {
bio_check_pages_dirty(bio);
} else {
- struct bio_vec *bvec;
- int i;
- struct bvec_iter_all iter_all;
+ struct page *pg;
+ unsigned i, j;
- bio_for_each_segment_all(bvec, bio, i, iter_all)
- put_page(bvec->bv_page);
+ bio_for_each_page_all(pg, bio, i, j)
+ put_page(pg);
bio_put(bio);
}
}
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 9bb015bc4a83..94f56e6ca573 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -537,8 +537,6 @@ static struct bio *dio_await_one(struct dio *dio)
*/
static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio)
{
- struct bio_vec *bvec;
- unsigned i;
blk_status_t err = bio->bi_status;
if (err) {
@@ -551,11 +549,10 @@ static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio)
if (dio->is_async && dio->op == REQ_OP_READ && dio->should_dirty) {
bio_check_pages_dirty(bio); /* transfers ownership */
} else {
- struct bvec_iter_all iter_all;
-
- bio_for_each_segment_all(bvec, bio, i, iter_all) {
- struct page *page = bvec->bv_page;
+ struct page *page;
+ unsigned i, j;
+ bio_for_each_page_all(page, bio, i, j) {
if (dio->op == REQ_OP_READ && !PageCompound(page) &&
dio->should_dirty)
set_page_dirty_lock(page);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index bb6090aa165d..d7ba07c5252d 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -134,6 +134,11 @@ static inline bool bio_full(struct bio *bio)
for (i = 0, iter_all.idx = 0; iter_all.idx < (bio)->bi_vcnt; iter_all.idx++) \
mp_bvec_for_each_segment(bvl, &((bio)->bi_io_vec[iter_all.idx]), i, iter_all)
+/* iterate over each single page in this bio */
+#define bio_for_each_page_all(pg, bio, i, j) \
+ for (i = 0; i < (bio)->bi_vcnt; i++) \
+ mp_bvec_for_each_page(pg, &((bio)->bi_io_vec[i]), j)
+
static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
unsigned bytes)
{
--
2.9.5
--
Ming
* Re: [PATCH 0/4] block: optimize for single-page bvec workloads
2019-02-27 15:41 ` Ming Lei
@ 2019-02-27 15:54 ` Jens Axboe
0 siblings, 0 replies; 10+ messages in thread
From: Jens Axboe @ 2019-02-27 15:54 UTC (permalink / raw)
To: Ming Lei; +Cc: linux-block
On 2/27/19 8:41 AM, Ming Lei wrote:
> On Wed, Feb 27, 2019 at 06:25:30AM -0700, Jens Axboe wrote:
>> On 2/27/19 5:40 AM, Ming Lei wrote:
>>> Hi,
>>>
>>> The 1st patch introduces bvec_nth_page(), so that nth_page() can
>>> be avoided when the bvec is single-page.
>>>
>>> The 2nd and 3rd patches add a fast path for the single-page bvec case.
>>>
>>> The last patch introduces a light-weight helper for iterating over
>>> pages, which may improve __bio_iov_bvec_add_pages(). This patch
>>> is for io_uring.
>>
>> This reclaims another 2%, we're now at 1585K for the test case.
>> Definite improvement!
>
> BTW, could you test the following patch against the 4 patches?
No noticeable difference with this one, though it looks like an
improvement from a code-inspection point of view.
--
Jens Axboe
* Re: [PATCH 4/4] block: introduce mp_bvec_for_each_page() for iterating over page
2019-02-27 12:40 ` [PATCH 4/4] block: introduce mp_bvec_for_each_page() for iterating over page Ming Lei
@ 2019-02-28 14:13 ` Christoph Hellwig
2019-02-28 15:10 ` Jens Axboe
0 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2019-02-28 14:13 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block
On Wed, Feb 27, 2019 at 08:40:13PM +0800, Ming Lei wrote:
> mp_bvec_for_each_segment() is a bit heavyweight for this iteration, so
> introduce a light-weight helper for iterating over pages, which saves
> 32 bytes of stack space.
The version in Jens' tree seems to add this helper, but no actual
users...
* Re: [PATCH 4/4] block: introduce mp_bvec_for_each_page() for iterating over page
2019-02-28 14:13 ` Christoph Hellwig
@ 2019-02-28 15:10 ` Jens Axboe
0 siblings, 0 replies; 10+ messages in thread
From: Jens Axboe @ 2019-02-28 15:10 UTC (permalink / raw)
To: Christoph Hellwig, Ming Lei; +Cc: linux-block
On 2/28/19 7:13 AM, Christoph Hellwig wrote:
> On Wed, Feb 27, 2019 at 08:40:13PM +0800, Ming Lei wrote:
>> mp_bvec_for_each_segment() is a bit heavyweight for this iteration, so
>> introduce a light-weight helper for iterating over pages, which saves
>> 32 bytes of stack space.
>
> The version in Jens' tree seems to add this helper, but no actual
> users..
io_uring uses it, which is based on the block branch.
But hopefully that too can go away, if the iov/bio no-ref stuff is
reviewed + merged...
--
Jens Axboe