* block device direct I/O fast path
@ 2016-10-31 17:59 Christoph Hellwig
  2016-10-31 17:59 ` [PATCH 1/2] block: add bio_iov_iter_get_pages() Christoph Hellwig
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Christoph Hellwig @ 2016-10-31 17:59 UTC (permalink / raw)
  To: axboe; +Cc: linux-block, linux-kernel

Hi Jens,

this small series adds a fast path to the block device direct I/O
code.  It uses new magic created by Kent to avoid allocating an array
for the pages, and as part of that allows small, non-aio direct I/O
requests to be done without memory allocations or atomic ops and with
a minimal cache footprint.  It's basically a cut-down version of the
new iomap direct I/O code, and in the future it might also make sense
to move the main direct I/O code to a similar model.  But independent
of that, it's always worth optimizing the case of small, non-aio
requests, as allocating the bio and biovec on the stack and using a
trivial completion handler will always win over a full-blown
implementation.


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/2] block: add bio_iov_iter_get_pages()
  2016-10-31 17:59 block device direct I/O fast path Christoph Hellwig
@ 2016-10-31 17:59 ` Christoph Hellwig
  2016-11-01  1:05   ` Ming Lei
  2016-10-31 17:59 ` [PATCH 2/2] block: fast-path for small and simple direct I/O requests Christoph Hellwig
  2016-11-01 17:00 ` block device direct I/O fast path Jens Axboe
  2 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2016-10-31 17:59 UTC (permalink / raw)
  To: axboe; +Cc: linux-block, linux-kernel, Kent Overstreet

From: Kent Overstreet <kent.overstreet@gmail.com>

This is a helper that pins down a range from an iov_iter and adds it to
a bio without requiring a separate memory allocation for the page array.
It will be used for upcoming direct I/O implementations for block devices
and iomap based file systems.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
[hch: ported to the iov_iter interface, renamed and added comments.
      All blame should be directed to me and all fame should go to Kent
      after this!]
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c         | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/bio.h |  1 +
 2 files changed, 50 insertions(+)

diff --git a/block/bio.c b/block/bio.c
index db85c57..2cf6eba 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -847,6 +847,55 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+/**
+ * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
+ * @bio: bio to add pages to
+ * @iter: iov iterator describing the region to be mapped
+ *
+ * Pins as many pages from *iter as fit and appends them to @bio's bvec
+ * array.  The pages will have to be released using put_page() when done.
+ */
+int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+{
+	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
+	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
+	struct page **pages = (struct page **)bv;
+	size_t offset, diff;
+	ssize_t size;
+
+	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
+	if (unlikely(size <= 0))
+		return size ? size : -EFAULT;
+	nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
+
+	/*
+	 * Deep magic below:  We need to walk the pinned pages backwards
+	 * because we are abusing the space allocated for the bio_vecs
+	 * for the page array.  Because the bio_vecs are larger than the
+	 * page pointers by definition this will always work.  But it also
+	 * means we can't use bio_add_page, so any changes to its semantics
+	 * need to be reflected here as well.
+	 */
+	bio->bi_iter.bi_size += size;
+	bio->bi_vcnt += nr_pages;
+
+	diff = (nr_pages * PAGE_SIZE - offset) - size;
+	while (nr_pages--) {
+		bv[nr_pages].bv_page = pages[nr_pages];
+		bv[nr_pages].bv_len = PAGE_SIZE;
+		bv[nr_pages].bv_offset = 0;
+	}
+
+	bv[0].bv_offset += offset;
+	bv[0].bv_len -= offset;
+	if (diff)
+		bv[bio->bi_vcnt - 1].bv_len -= diff;
+
+	iov_iter_advance(iter, size);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages);
+
 struct submit_bio_ret {
 	struct completion event;
 	int error;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 87ce64d..c39fa0b 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -419,6 +419,7 @@ void bio_chain(struct bio *, struct bio *);
 extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
 extern int bio_add_pc_page(struct request_queue *, struct bio *, struct page *,
 			   unsigned int, unsigned int);
+int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter);
 struct rq_map_data;
 extern struct bio *bio_map_user_iov(struct request_queue *,
 				    const struct iov_iter *, gfp_t);
-- 
2.1.4



* [PATCH 2/2] block: fast-path for small and simple direct I/O requests
  2016-10-31 17:59 block device direct I/O fast path Christoph Hellwig
  2016-10-31 17:59 ` [PATCH 1/2] block: add bio_iov_iter_get_pages() Christoph Hellwig
@ 2016-10-31 17:59 ` Christoph Hellwig
  2016-10-31 23:19   ` Omar Sandoval
  2016-11-01 17:00 ` block device direct I/O fast path Jens Axboe
  2 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2016-10-31 17:59 UTC (permalink / raw)
  To: axboe; +Cc: linux-block, linux-kernel

This patch adds a small and simple fast path for small direct I/O
requests on block devices that don't use AIO.  Between the neat
bio_iov_iter_get_pages helper that avoids allocating a page array
for get_user_pages and the on-stack bio and biovec, this avoids memory
allocations and atomic operations entirely in the direct I/O code
(lower levels might still do memory allocations and will usually
have at least some atomic operations, though).

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 05b5533..d4134a3 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -30,6 +30,7 @@
 #include <linux/cleancache.h>
 #include <linux/dax.h>
 #include <linux/badblocks.h>
+#include <linux/task_io_accounting_ops.h>
 #include <linux/falloc.h>
 #include <asm/uaccess.h>
 #include "internal.h"
@@ -175,12 +176,91 @@ static struct inode *bdev_file_inode(struct file *file)
 	return file->f_mapping->host;
 }
 
+#define DIO_INLINE_BIO_VECS 4
+
+static void blkdev_bio_end_io_simple(struct bio *bio)
+{
+	struct task_struct *waiter = bio->bi_private;
+
+	WRITE_ONCE(bio->bi_private, NULL);
+	wake_up_process(waiter);
+}
+
+static ssize_t
+__blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
+		int nr_pages)
+{
+	struct file *file = iocb->ki_filp;
+	struct block_device *bdev = I_BDEV(bdev_file_inode(file));
+	unsigned blkbits = blksize_bits(bdev_logical_block_size(bdev));
+	struct bio_vec inline_vecs[DIO_INLINE_BIO_VECS], *bvec;
+	loff_t pos = iocb->ki_pos;
+	bool should_dirty = false;
+	struct bio bio;
+	ssize_t ret;
+	blk_qc_t qc;
+	int i;
+
+	if ((pos | iov_iter_alignment(iter)) & ((1 << blkbits) - 1))
+		return -EINVAL;
+
+	bio_init(&bio);
+	bio.bi_max_vecs = nr_pages;
+	bio.bi_io_vec = inline_vecs;
+	bio.bi_bdev = bdev;
+	bio.bi_iter.bi_sector = pos >> blkbits;
+	bio.bi_private = current;
+	bio.bi_end_io = blkdev_bio_end_io_simple;
+
+	ret = bio_iov_iter_get_pages(&bio, iter);
+	if (unlikely(ret))
+		return ret;
+	ret = bio.bi_iter.bi_size;
+
+	if (iov_iter_rw(iter) == READ) {
+		bio_set_op_attrs(&bio, REQ_OP_READ, 0);
+		if (iter->type == ITER_IOVEC)
+			should_dirty = true;
+	} else {
+		bio_set_op_attrs(&bio, REQ_OP_WRITE, WRITE_ODIRECT);
+		task_io_account_write(ret);
+	}
+
+	qc = submit_bio(&bio);
+	for (;;) {
+		set_current_state(TASK_UNINTERRUPTIBLE);
+		if (!READ_ONCE(bio.bi_private))
+			break;
+		if (!(iocb->ki_flags & IOCB_HIPRI) ||
+		    !blk_poll(bdev_get_queue(bdev), qc))
+			io_schedule();
+	}
+	__set_current_state(TASK_RUNNING);
+
+	bio_for_each_segment_all(bvec, &bio, i) {
+		if (should_dirty && !PageCompound(bvec->bv_page))
+			set_page_dirty_lock(bvec->bv_page);
+		put_page(bvec->bv_page);
+	}
+
+	if (unlikely(bio.bi_error))
+		return bio.bi_error;
+	iocb->ki_pos += ret;
+	return ret;
+}
+
 static ssize_t
 blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 {
 	struct file *file = iocb->ki_filp;
 	struct inode *inode = bdev_file_inode(file);
+	int nr_pages;
 
+	nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
+	if (!nr_pages)
+		return 0;
+	if (is_sync_kiocb(iocb) && nr_pages <= DIO_INLINE_BIO_VECS)
+		return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
 	return __blockdev_direct_IO(iocb, inode, I_BDEV(inode), iter,
 				    blkdev_get_block, NULL, NULL,
 				    DIO_SKIP_DIO_COUNT);
-- 
2.1.4



* Re: [PATCH 2/2] block: fast-path for small and simple direct I/O requests
  2016-10-31 17:59 ` [PATCH 2/2] block: fast-path for small and simple direct I/O requests Christoph Hellwig
@ 2016-10-31 23:19   ` Omar Sandoval
  2016-11-01 14:18     ` Christoph Hellwig
  0 siblings, 1 reply; 11+ messages in thread
From: Omar Sandoval @ 2016-10-31 23:19 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: axboe, linux-block, linux-kernel

On Mon, Oct 31, 2016 at 11:59:25AM -0600, Christoph Hellwig wrote:
> This patch adds a small and simple fast path for small direct I/O
> requests on block devices that don't use AIO.  Between the neat
> bio_iov_iter_get_pages helper that avoids allocating a page array
> for get_user_pages and the on-stack bio and biovec, this avoids memory
> allocations and atomic operations entirely in the direct I/O code
> (lower levels might still do memory allocations and will usually
> have at least some atomic operations, though).
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/block_dev.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 80 insertions(+)
> 
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 05b5533..d4134a3 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -30,6 +30,7 @@
>  #include <linux/cleancache.h>
>  #include <linux/dax.h>
>  #include <linux/badblocks.h>
> +#include <linux/task_io_accounting_ops.h>
>  #include <linux/falloc.h>
>  #include <asm/uaccess.h>
>  #include "internal.h"
> @@ -175,12 +176,91 @@ static struct inode *bdev_file_inode(struct file *file)
>  	return file->f_mapping->host;
>  }
>  
> +#define DIO_INLINE_BIO_VECS 4
> +
> +static void blkdev_bio_end_io_simple(struct bio *bio)
> +{
> +	struct task_struct *waiter = bio->bi_private;
> +
> +	WRITE_ONCE(bio->bi_private, NULL);
> +	wake_up_process(waiter);
> +}
> +
> +static ssize_t
> +__blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
> +		int nr_pages)
> +{
> +	struct file *file = iocb->ki_filp;
> +	struct block_device *bdev = I_BDEV(bdev_file_inode(file));
> +	unsigned blkbits = blksize_bits(bdev_logical_block_size(bdev));
> +	struct bio_vec inline_vecs[DIO_INLINE_BIO_VECS], *bvec;
> +	loff_t pos = iocb->ki_pos;
> +	bool should_dirty = false;
> +	struct bio bio;
> +	ssize_t ret;
> +	blk_qc_t qc;
> +	int i;
> +
> +	if ((pos | iov_iter_alignment(iter)) & ((1 << blkbits) - 1))
> +		return -EINVAL;
> +
> +	bio_init(&bio);
> +	bio.bi_max_vecs = nr_pages;
> +	bio.bi_io_vec = inline_vecs;
> +	bio.bi_bdev = bdev;
> +	bio.bi_iter.bi_sector = pos >> blkbits;
> +	bio.bi_private = current;
> +	bio.bi_end_io = blkdev_bio_end_io_simple;
> +
> +	ret = bio_iov_iter_get_pages(&bio, iter);
> +	if (unlikely(ret))
> +		return ret;
> +	ret = bio.bi_iter.bi_size;
> +
> +	if (iov_iter_rw(iter) == READ) {
> +		bio_set_op_attrs(&bio, REQ_OP_READ, 0);
> +		if (iter->type == ITER_IOVEC)

Nit: iter_is_iovec()?

> +			should_dirty = true;
> +	} else {
> +		bio_set_op_attrs(&bio, REQ_OP_WRITE, WRITE_ODIRECT);
> +		task_io_account_write(ret);
> +	}
> +
> +	qc = submit_bio(&bio);
> +	for (;;) {
> +		set_current_state(TASK_UNINTERRUPTIBLE);
> +		if (!READ_ONCE(bio.bi_private))
> +			break;
> +		if (!(iocb->ki_flags & IOCB_HIPRI) ||
> +		    !blk_poll(bdev_get_queue(bdev), qc))
> +			io_schedule();
> +	}
> +	__set_current_state(TASK_RUNNING);
> +
> +	bio_for_each_segment_all(bvec, &bio, i) {
> +		if (should_dirty && !PageCompound(bvec->bv_page))
> +			set_page_dirty_lock(bvec->bv_page);
> +		put_page(bvec->bv_page);
> +	}
> +
> +	if (unlikely(bio.bi_error))
> +		return bio.bi_error;
> +	iocb->ki_pos += ret;
> +	return ret;
> +}
> +
>  static ssize_t
>  blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>  {
>  	struct file *file = iocb->ki_filp;
>  	struct inode *inode = bdev_file_inode(file);
> +	int nr_pages;
>  
> +	nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
> +	if (!nr_pages)
> +		return 0;
> +	if (is_sync_kiocb(iocb) && nr_pages <= DIO_INLINE_BIO_VECS)
> +		return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
>  	return __blockdev_direct_IO(iocb, inode, I_BDEV(inode), iter,
>  				    blkdev_get_block, NULL, NULL,
>  				    DIO_SKIP_DIO_COUNT);

The bio_iov_iter_get_pages() trick is nifty. Jens had a similarly
stripped-down version of blkdev_direct_IO() for blk-mq as part of his
blk-dio branch. Looks like he just had the page array on the stack but
still had to allocate the bio.

-- 
Omar


* Re: [PATCH 1/2] block: add bio_iov_iter_get_pages()
  2016-10-31 17:59 ` [PATCH 1/2] block: add bio_iov_iter_get_pages() Christoph Hellwig
@ 2016-11-01  1:05   ` Ming Lei
  2016-11-01 14:19     ` Christoph Hellwig
  0 siblings, 1 reply; 11+ messages in thread
From: Ming Lei @ 2016-11-01  1:05 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, linux-block, Linux Kernel Mailing List, Kent Overstreet

On Tue, Nov 1, 2016 at 1:59 AM, Christoph Hellwig <hch@lst.de> wrote:
> From: Kent Overstreet <kent.overstreet@gmail.com>
>
> This is a helper that pins down a range from an iov_iter and adds it to
> a bio without requiring a separate memory allocation for the page array.
> It will be used for upcoming direct I/O implementations for block devices
> and iomap based file systems.
>
> Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
> [hch: ported to the iov_iter interface, renamed and added comments.
>       All blame should be directed to me and all fame should go to Kent
>       after this!]
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/bio.c         | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/bio.h |  1 +
>  2 files changed, 50 insertions(+)
>
> diff --git a/block/bio.c b/block/bio.c
> index db85c57..2cf6eba 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -847,6 +847,55 @@ int bio_add_page(struct bio *bio, struct page *page,
>  }
>  EXPORT_SYMBOL(bio_add_page);
>
> +/**
> + * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
> + * @bio: bio to add pages to
> + * @iter: iov iterator describing the region to be mapped
> + *
> + * Pins as many pages from *iter as fit and appends them to @bio's bvec
> + * array.  The pages will have to be released using put_page() when done.
> + */
> +int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> +{
> +       unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
> +       struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> +       struct page **pages = (struct page **)bv;
> +       size_t offset, diff;
> +       ssize_t size;
> +
> +       size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);

BTW, if there were a multi-page version of get_user_pages_fast() and
iov_iter_get_pages(), the size of the page array could be reduced too.

> +       if (unlikely(size <= 0))
> +               return size ? size : -EFAULT;
> +       nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
> +
> +       /*
> +        * Deep magic below:  We need to walk the pinned pages backwards
> +        * because we are abusing the space allocated for the bio_vecs
> +        * for the page array.  Because the bio_vecs are larger than the
> +        * page pointers by definition this will always work.  But it also
> +        * means we can't use bio_add_page, so any changes to its semantics
> +        * need to be reflected here as well.
> +        */
> +       bio->bi_iter.bi_size += size;
> +       bio->bi_vcnt += nr_pages;
> +
> +       diff = (nr_pages * PAGE_SIZE - offset) - size;
> +       while (nr_pages--) {
> +               bv[nr_pages].bv_page = pages[nr_pages];
> +               bv[nr_pages].bv_len = PAGE_SIZE;
> +               bv[nr_pages].bv_offset = 0;
> +       }
> +
> +       bv[0].bv_offset += offset;
> +       bv[0].bv_len -= offset;
> +       if (diff)
> +               bv[bio->bi_vcnt - 1].bv_len -= diff;
> +
> +       iov_iter_advance(iter, size);
> +       return 0;
> +}
> +EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages);
> +
>  struct submit_bio_ret {
>         struct completion event;
>         int error;
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index 87ce64d..c39fa0b 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -419,6 +419,7 @@ void bio_chain(struct bio *, struct bio *);
>  extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
>  extern int bio_add_pc_page(struct request_queue *, struct bio *, struct page *,
>                            unsigned int, unsigned int);
> +int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter);
>  struct rq_map_data;
>  extern struct bio *bio_map_user_iov(struct request_queue *,
>                                     const struct iov_iter *, gfp_t);
> --
> 2.1.4
>



-- 
Ming Lei


* Re: [PATCH 2/2] block: fast-path for small and simple direct I/O requests
  2016-10-31 23:19   ` Omar Sandoval
@ 2016-11-01 14:18     ` Christoph Hellwig
  0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2016-11-01 14:18 UTC (permalink / raw)
  To: Omar Sandoval; +Cc: Christoph Hellwig, axboe, linux-block, linux-kernel

On Mon, Oct 31, 2016 at 04:19:11PM -0700, Omar Sandoval wrote:
> > +	if (unlikely(ret))
> > +		return ret;
> > +	ret = bio.bi_iter.bi_size;
> > +
> > +	if (iov_iter_rw(iter) == READ) {
> > +		bio_set_op_attrs(&bio, REQ_OP_READ, 0);
> > +		if (iter->type == ITER_IOVEC)
> 
> Nit: iter_is_iovec()?

Fixed.



* Re: [PATCH 1/2] block: add bio_iov_iter_get_pages()
  2016-11-01  1:05   ` Ming Lei
@ 2016-11-01 14:19     ` Christoph Hellwig
  0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2016-11-01 14:19 UTC (permalink / raw)
  To: Ming Lei
  Cc: Christoph Hellwig, Jens Axboe, linux-block,
	Linux Kernel Mailing List, Kent Overstreet

On Tue, Nov 01, 2016 at 09:05:00AM +0800, Ming Lei wrote:
> > +       size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> 
> BTW, if there were a multi-page version of get_user_pages_fast() and
> iov_iter_get_pages(), the size of the page array could be reduced too.

There isn't at the moment.  I have some plans for it, but it will require
lots of mm and arch changes, so I'd like to keep it separate.



* Re: block device direct I/O fast path
  2016-10-31 17:59 block device direct I/O fast path Christoph Hellwig
  2016-10-31 17:59 ` [PATCH 1/2] block: add bio_iov_iter_get_pages() Christoph Hellwig
  2016-10-31 17:59 ` [PATCH 2/2] block: fast-path for small and simple direct I/O requests Christoph Hellwig
@ 2016-11-01 17:00 ` Jens Axboe
  2016-11-01 17:06   ` Christoph Hellwig
  2 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2016-11-01 17:00 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, linux-kernel

On Mon, Oct 31 2016, Christoph Hellwig wrote:
> Hi Jens,
> 
> this small series adds a fast path to the block device direct I/O
> code.  It uses new magic created by Kent to avoid allocating an array
> for the pages, and as part of that allows small, non-aio direct I/O
> requests to be done without memory allocations or atomic ops and with
> a minimal cache footprint.  It's basically a cut-down version of the
> new iomap direct I/O code, and in the future it might also make sense
> to move the main direct I/O code to a similar model.  But independent
> of that, it's always worth optimizing the case of small, non-aio
> requests, as allocating the bio and biovec on the stack and using a
> trivial completion handler will always win over a full-blown
> implementation.

I'm not particularly tied to the request based implementation that I did
here:

http://git.kernel.dk/cgit/linux-block/log/?h=blk-dio

I basically wanted to solve two problems with that:

1) The slow old direct-io.c code
2) Implement the hybrid polling

#1 is accomplished with your code as well, though it does lack support
for async IO; that would not be hard to add.

#2 is a bit more problematic, I'm pondering how we can implement that on
top of the bio approach. The nice thing about the request based approach
is that we have a 1:1 mapping with the unit on the driver side. And we
have a place to store the timer. I don't particularly love the embedded
timer, however, it'd be great to implement that differently. Trying to
think of options there, haven't found any yet.

-- 
Jens Axboe


* Re: block device direct I/O fast path
  2016-11-01 17:00 ` block device direct I/O fast path Jens Axboe
@ 2016-11-01 17:06   ` Christoph Hellwig
  2016-11-01 17:54     ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2016-11-01 17:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, linux-block, linux-kernel

On Tue, Nov 01, 2016 at 11:00:19AM -0600, Jens Axboe wrote:
> #2 is a bit more problematic, I'm pondering how we can implement that on
> top of the bio approach. The nice thing about the request based approach
> is that we have a 1:1 mapping with the unit on the driver side. And we
> have a place to store the timer. I don't particularly love the embedded
> timer, however, it'd be great to implement that differently. Trying to
> think of options there, haven't found any yet.

I have a couple ideas for that.  Give me a few weeks and I'll send
patches.


* Re: block device direct I/O fast path
  2016-11-01 17:06   ` Christoph Hellwig
@ 2016-11-01 17:54     ` Jens Axboe
  2016-11-01 18:22       ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2016-11-01 17:54 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, linux-kernel

On Tue, Nov 01 2016, Christoph Hellwig wrote:
> On Tue, Nov 01, 2016 at 11:00:19AM -0600, Jens Axboe wrote:
> > #2 is a bit more problematic, I'm pondering how we can implement that on
> > top of the bio approach. The nice thing about the request based approach
> > is that we have a 1:1 mapping with the unit on the driver side. And we
> > have a place to store the timer. I don't particularly love the embedded
> > timer, however, it'd be great to implement that differently. Trying to
> > think of options there, haven't found any yet.
> 
> I have a couple ideas for that.  Give me a few weeks and I'll send
> patches.

I'm not that patient :-)

For the SYNC part, it should be easy enough to do by just using an
on-stack hrtimer. I guess that will do for now, since we don't have
poll support for async O_DIRECT right now anyway.

http://git.kernel.dk/cgit/linux-block/commit/?h=for-4.10/dio&id=e96d9afd56791a61d463cb88f8f3b48393b71020

Untested, but should get the point across. I'll fire up a test box.

-- 
Jens Axboe


* Re: block device direct I/O fast path
  2016-11-01 17:54     ` Jens Axboe
@ 2016-11-01 18:22       ` Jens Axboe
  0 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2016-11-01 18:22 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, linux-kernel

On 11/01/2016 11:54 AM, Jens Axboe wrote:
> On Tue, Nov 01 2016, Christoph Hellwig wrote:
>> On Tue, Nov 01, 2016 at 11:00:19AM -0600, Jens Axboe wrote:
>>> #2 is a bit more problematic, I'm pondering how we can implement that on
>>> top of the bio approach. The nice thing about the request based approach
>>> is that we have a 1:1 mapping with the unit on the driver side. And we
>>> have a place to store the timer. I don't particularly love the embedded
>>> timer, however, it'd be great to implement that differently. Trying to
>>> think of options there, haven't found any yet.
>>
>> I have a couple ideas for that.  Give me a few weeks and I'll send
>> patches.
>
> I'm not that patient :-)
>
> For the SYNC part, it should be easy enough to do by just using an
> on-stack hrtimer. I guess that will do for now, since we don't have
> poll support for async O_DIRECT right now anyway.
>
> http://git.kernel.dk/cgit/linux-block/commit/?h=for-4.10/dio&id=e96d9afd56791a61d463cb88f8f3b48393b71020
>
> Untested, but should get the point across. I'll fire up a test box.

http://git.kernel.dk/cgit/linux-block/commit/?h=for-4.10/dio&id=b879a97d749c5c1755d1e2f20c721eb6fde0c291

Now tested; I had inadvertently used the absolute timer instead of the
relative one.  Works for me.

-- 
Jens Axboe

