* [PATCHSET v8] io_uring IO interface
@ 2019-01-28 21:35 Jens Axboe
  2019-01-28 21:35 ` [PATCH 01/18] fs: add an iopoll method to struct file_operations Jens Axboe
                   ` (17 more replies)
  0 siblings, 18 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi

Here's v8 of the io_uring interface. This version contains various
small fixes all over the map and addresses a number of review concerns.

No new features, but the io_uring_enter(2) system call grew arguments
for a sigset_t so we can support poll properly.
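
As a rough sketch (not part of the series itself), the wait side can
now install a signal mask for the duration of the wait, pselect(2)
style. The snippet below assumes the x86-64 syscall number and the
IORING_ENTER_GETEVENTS flag added later in this series, and passes the
kernel's sigset size (_NSIG / 8) rather than glibc's sizeof(sigset_t):

  #include <signal.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #define __NR_io_uring_enter	426		/* x86-64, from this series */
  #define IORING_ENTER_GETEVENTS	(1U << 0)

  /* Wait for one completion with every signal except SIGINT blocked. */
  static int wait_one_cqe(int ring_fd)
  {
      sigset_t mask;

      sigfillset(&mask);
      sigdelset(&mask, SIGINT);

      return syscall(__NR_io_uring_enter, ring_fd, 0, 1,
                     IORING_ENTER_GETEVENTS, &mask, _NSIG / 8);
  }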

For a general introduction to this patchset, see previous postings or
the LWN writeup here:

https://lwn.net/Articles/776703/

No changes in the liburing user-space library this time around, but as
a reference, you can clone it here:

git://git.kernel.dk/liburing

We're still missing a man page for io_uring_enter(2), but the two other
system calls are documented.

Patches are against 5.0-rc4, and can also be found in my io_uring branch
here:

git://git.kernel.dk/linux-block io_uring

Changes since v7:
- Rebase on v5.0-rc4
- Add grace period control for SQ poll
- Add IORING_ENTER_SQ_WAKEUP instead of overloading 'to_submit'
- Address various minor review comments
- Use in_compat_syscall() instead of storing it in the ctx
- Remove now unneeded compat system call
- Ensure nops appropriately serialize the cq ring
- Add sigset_t support for wait side of io_uring_enter(2)
- Stop using page_frag_free()
- Remove duplicate include
- Make sure sq thread and application can't stomp on each other
- Add array_index_nospec() limiter for p->sq_thread_cpu

 Documentation/filesystems/vfs.txt      |    3 +
 arch/x86/entry/syscalls/syscall_32.tbl |    3 +
 arch/x86/entry/syscalls/syscall_64.tbl |    3 +
 block/bio.c                            |   59 +-
 fs/Makefile                            |    1 +
 fs/block_dev.c                         |   19 +-
 fs/file.c                              |   15 +-
 fs/file_table.c                        |    9 +-
 fs/gfs2/file.c                         |    2 +
 fs/io_uring.c                          | 2550 ++++++++++++++++++++++++
 fs/iomap.c                             |   48 +-
 fs/xfs/xfs_file.c                      |    1 +
 include/linux/bio.h                    |   14 +
 include/linux/blk_types.h              |    1 +
 include/linux/file.h                   |    2 +
 include/linux/fs.h                     |    6 +-
 include/linux/iomap.h                  |    1 +
 include/linux/sched/user.h             |    2 +-
 include/linux/syscalls.h               |    8 +
 include/uapi/asm-generic/unistd.h      |    8 +-
 include/uapi/linux/io_uring.h          |  143 ++
 init/Kconfig                           |    9 +
 kernel/sys_ni.c                        |    3 +
 23 files changed, 2869 insertions(+), 41 deletions(-)

-- 
Jens Axboe


* [PATCH 01/18] fs: add an iopoll method to struct file_operations
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 02/18] block: wire up block device iopoll method Jens Axboe
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

From: Christoph Hellwig <hch@lst.de>

This new method is used to explicitly poll for I/O completion for an
iocb.  It must be called for any iocb submitted asynchronously (that
is, with a non-NULL ki_complete) which has the IOCB_HIPRI flag set.

The method is assisted by a new ki_cookie field in struct iocb to store
the polling cookie.
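
For illustration only (not part of this patch), a submitter that has
issued a HIPRI iocb could reap its completion with a loop along these
lines, where 'done' is a hypothetical flag set by the iocb's
ki_complete handler:

  /*
   * Sketch: spin on ->iopoll() until the completion handler for this
   * iocb has run. 'done' is a flag assumed to be owned by the caller
   * and set from its ki_complete callback.
   */
  static int poll_for_completion(struct kiocb *kiocb, bool *done)
  {
      int ret;

      while (!READ_ONCE(*done)) {
          ret = kiocb->ki_filp->f_op->iopoll(kiocb, true);
          if (ret < 0)
              return ret;
      }
      return 0;
  }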

Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 Documentation/filesystems/vfs.txt | 3 +++
 include/linux/fs.h                | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index 8dc8e9c2913f..761c6fd24a53 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -857,6 +857,7 @@ struct file_operations {
 	ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
 	ssize_t (*read_iter) (struct kiocb *, struct iov_iter *);
 	ssize_t (*write_iter) (struct kiocb *, struct iov_iter *);
+	int (*iopoll)(struct kiocb *kiocb, bool spin);
 	int (*iterate) (struct file *, struct dir_context *);
 	int (*iterate_shared) (struct file *, struct dir_context *);
 	__poll_t (*poll) (struct file *, struct poll_table_struct *);
@@ -902,6 +903,8 @@ otherwise noted.
 
   write_iter: possibly asynchronous write with iov_iter as source
 
+  iopoll: called when aio wants to poll for completions on HIPRI iocbs
+
   iterate: called when the VFS needs to read the directory contents
 
   iterate_shared: called when the VFS needs to read the directory contents
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 811c77743dad..ccb0b7a63aa5 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -310,6 +310,7 @@ struct kiocb {
 	int			ki_flags;
 	u16			ki_hint;
 	u16			ki_ioprio; /* See linux/ioprio.h */
+	unsigned int		ki_cookie; /* for ->iopoll */
 } __randomize_layout;
 
 static inline bool is_sync_kiocb(struct kiocb *kiocb)
@@ -1786,6 +1787,7 @@ struct file_operations {
 	ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
 	ssize_t (*read_iter) (struct kiocb *, struct iov_iter *);
 	ssize_t (*write_iter) (struct kiocb *, struct iov_iter *);
+	int (*iopoll)(struct kiocb *kiocb, bool spin);
 	int (*iterate) (struct file *, struct dir_context *);
 	int (*iterate_shared) (struct file *, struct dir_context *);
 	__poll_t (*poll) (struct file *, struct poll_table_struct *);
-- 
2.17.1

* [PATCH 02/18] block: wire up block device iopoll method
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
  2019-01-28 21:35 ` [PATCH 01/18] fs: add an iopoll method to struct file_operations Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 03/18] block: add bio_set_polled() helper Jens Axboe
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

From: Christoph Hellwig <hch@lst.de>

Just call blk_poll on the iocb cookie, we can derive the block device
from the inode trivially.

Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/block_dev.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 58a4c1217fa8..f18d076a2596 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -293,6 +293,14 @@ struct blkdev_dio {
 
 static struct bio_set blkdev_dio_pool;
 
+static int blkdev_iopoll(struct kiocb *kiocb, bool wait)
+{
+	struct block_device *bdev = I_BDEV(kiocb->ki_filp->f_mapping->host);
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	return blk_poll(q, READ_ONCE(kiocb->ki_cookie), wait);
+}
+
 static void blkdev_bio_end_io(struct bio *bio)
 {
 	struct blkdev_dio *dio = bio->bi_private;
@@ -410,6 +418,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 				bio->bi_opf |= REQ_HIPRI;
 
 			qc = submit_bio(bio);
+			WRITE_ONCE(iocb->ki_cookie, qc);
 			break;
 		}
 
@@ -2076,6 +2085,7 @@ const struct file_operations def_blk_fops = {
 	.llseek		= block_llseek,
 	.read_iter	= blkdev_read_iter,
 	.write_iter	= blkdev_write_iter,
+	.iopoll		= blkdev_iopoll,
 	.mmap		= generic_file_mmap,
 	.fsync		= blkdev_fsync,
 	.unlocked_ioctl	= block_ioctl,
-- 
2.17.1

* [PATCH 03/18] block: add bio_set_polled() helper
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
  2019-01-28 21:35 ` [PATCH 01/18] fs: add an iopoll method to struct file_operations Jens Axboe
  2019-01-28 21:35 ` [PATCH 02/18] block: wire up block device iopoll method Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 04/18] iomap: wire up the iopoll method Jens Axboe
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

For the upcoming async polled IO, we can't sleep allocating requests.
If we do, then we introduce a deadlock where the submitter already
has async polled IO in-flight, but can't wait for it to complete
since polled requests must be actively found and reaped.

Add a bio_set_polled() helper that marks a bio as polled (REQ_HIPRI)
and, for async kiocbs, also sets REQ_NOWAIT so request allocation
won't block. Utilize the helper in the blockdev DIRECT_IO code.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/block_dev.c      |  4 ++--
 include/linux/bio.h | 14 ++++++++++++++
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index f18d076a2596..392e2bfb636f 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -247,7 +247,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
 		task_io_account_write(ret);
 	}
 	if (iocb->ki_flags & IOCB_HIPRI)
-		bio.bi_opf |= REQ_HIPRI;
+		bio_set_polled(&bio, iocb);
 
 	qc = submit_bio(&bio);
 	for (;;) {
@@ -415,7 +415,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 		nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
 		if (!nr_pages) {
 			if (iocb->ki_flags & IOCB_HIPRI)
-				bio->bi_opf |= REQ_HIPRI;
+				bio_set_polled(bio, iocb);
 
 			qc = submit_bio(bio);
 			WRITE_ONCE(iocb->ki_cookie, qc);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 7380b094dcca..f6f0a2b3cbc8 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -823,5 +823,19 @@ static inline int bio_integrity_add_page(struct bio *bio, struct page *page,
 
 #endif /* CONFIG_BLK_DEV_INTEGRITY */
 
+/*
+ * Mark a bio as polled. Note that for async polled IO, the caller must
+ * expect -EWOULDBLOCK if we cannot allocate a request (or other resources).
+ * We cannot block waiting for requests on polled IO, as those completions
+ * must be found by the caller. This is different than IRQ driven IO, where
+ * it's safe to wait for IO to complete.
+ */
+static inline void bio_set_polled(struct bio *bio, struct kiocb *kiocb)
+{
+	bio->bi_opf |= REQ_HIPRI;
+	if (!is_sync_kiocb(kiocb))
+		bio->bi_opf |= REQ_NOWAIT;
+}
+
 #endif /* CONFIG_BLOCK */
 #endif /* __LINUX_BIO_H */
-- 
2.17.1

* [PATCH 04/18] iomap: wire up the iopoll method
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (2 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 03/18] block: add bio_set_polled() helper Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 05/18] Add io_uring IO interface Jens Axboe
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

From: Christoph Hellwig <hch@lst.de>

Store the request queue the last bio was submitted to in the iocb
private data in addition to the cookie so that we find the right block
device.  Also refactor the common direct I/O bio submission code into a
nice little helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>

Modified to use bio_set_polled().

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/gfs2/file.c        |  2 ++
 fs/iomap.c            | 43 ++++++++++++++++++++++++++++---------------
 fs/xfs/xfs_file.c     |  1 +
 include/linux/iomap.h |  1 +
 4 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index a2dea5bc0427..58a768e59712 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -1280,6 +1280,7 @@ const struct file_operations gfs2_file_fops = {
 	.llseek		= gfs2_llseek,
 	.read_iter	= gfs2_file_read_iter,
 	.write_iter	= gfs2_file_write_iter,
+	.iopoll		= iomap_dio_iopoll,
 	.unlocked_ioctl	= gfs2_ioctl,
 	.mmap		= gfs2_mmap,
 	.open		= gfs2_open,
@@ -1310,6 +1311,7 @@ const struct file_operations gfs2_file_fops_nolock = {
 	.llseek		= gfs2_llseek,
 	.read_iter	= gfs2_file_read_iter,
 	.write_iter	= gfs2_file_write_iter,
+	.iopoll		= iomap_dio_iopoll,
 	.unlocked_ioctl	= gfs2_ioctl,
 	.mmap		= gfs2_mmap,
 	.open		= gfs2_open,
diff --git a/fs/iomap.c b/fs/iomap.c
index a3088fae567b..4ee50b76b4a1 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -1454,6 +1454,28 @@ struct iomap_dio {
 	};
 };
 
+int iomap_dio_iopoll(struct kiocb *kiocb, bool spin)
+{
+	struct request_queue *q = READ_ONCE(kiocb->private);
+
+	if (!q)
+		return 0;
+	return blk_poll(q, READ_ONCE(kiocb->ki_cookie), spin);
+}
+EXPORT_SYMBOL_GPL(iomap_dio_iopoll);
+
+static void iomap_dio_submit_bio(struct iomap_dio *dio, struct iomap *iomap,
+		struct bio *bio)
+{
+	atomic_inc(&dio->ref);
+
+	if (dio->iocb->ki_flags & IOCB_HIPRI)
+		bio_set_polled(bio, dio->iocb);
+
+	dio->submit.last_queue = bdev_get_queue(iomap->bdev);
+	dio->submit.cookie = submit_bio(bio);
+}
+
 static ssize_t iomap_dio_complete(struct iomap_dio *dio)
 {
 	struct kiocb *iocb = dio->iocb;
@@ -1566,7 +1588,7 @@ static void iomap_dio_bio_end_io(struct bio *bio)
 	}
 }
 
-static blk_qc_t
+static void
 iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos,
 		unsigned len)
 {
@@ -1580,15 +1602,10 @@ iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos,
 	bio->bi_private = dio;
 	bio->bi_end_io = iomap_dio_bio_end_io;
 
-	if (dio->iocb->ki_flags & IOCB_HIPRI)
-		flags |= REQ_HIPRI;
-
 	get_page(page);
 	__bio_add_page(bio, page, len, 0);
 	bio_set_op_attrs(bio, REQ_OP_WRITE, flags);
-
-	atomic_inc(&dio->ref);
-	return submit_bio(bio);
+	iomap_dio_submit_bio(dio, iomap, bio);
 }
 
 static loff_t
@@ -1691,9 +1708,6 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 				bio_set_pages_dirty(bio);
 		}
 
-		if (dio->iocb->ki_flags & IOCB_HIPRI)
-			bio->bi_opf |= REQ_HIPRI;
-
 		iov_iter_advance(dio->submit.iter, n);
 
 		dio->size += n;
@@ -1701,11 +1715,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 		copied += n;
 
 		nr_pages = iov_iter_npages(&iter, BIO_MAX_PAGES);
-
-		atomic_inc(&dio->ref);
-
-		dio->submit.last_queue = bdev_get_queue(iomap->bdev);
-		dio->submit.cookie = submit_bio(bio);
+		iomap_dio_submit_bio(dio, iomap, bio);
 	} while (nr_pages);
 
 	/*
@@ -1916,6 +1926,9 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
 	if (dio->flags & IOMAP_DIO_WRITE_FUA)
 		dio->flags &= ~IOMAP_DIO_NEED_SYNC;
 
+	WRITE_ONCE(iocb->ki_cookie, dio->submit.cookie);
+	WRITE_ONCE(iocb->private, dio->submit.last_queue);
+
 	if (!atomic_dec_and_test(&dio->ref)) {
 		if (!dio->wait_for_completion)
 			return -EIOCBQUEUED;
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index e47425071e65..60c2da41f0fc 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1203,6 +1203,7 @@ const struct file_operations xfs_file_operations = {
 	.write_iter	= xfs_file_write_iter,
 	.splice_read	= generic_file_splice_read,
 	.splice_write	= iter_file_splice_write,
+	.iopoll		= iomap_dio_iopoll,
 	.unlocked_ioctl	= xfs_file_ioctl,
 #ifdef CONFIG_COMPAT
 	.compat_ioctl	= xfs_file_compat_ioctl,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 9a4258154b25..0fefb5455bda 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -162,6 +162,7 @@ typedef int (iomap_dio_end_io_t)(struct kiocb *iocb, ssize_t ret,
 		unsigned flags);
 ssize_t iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
 		const struct iomap_ops *ops, iomap_dio_end_io_t end_io);
+int iomap_dio_iopoll(struct kiocb *kiocb, bool spin);
 
 #ifdef CONFIG_SWAP
 struct file;
-- 
2.17.1

* [PATCH 05/18] Add io_uring IO interface
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (3 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 04/18] iomap: wire up the iopoll method Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:53   ` Jeff Moyer
                     ` (5 more replies)
  2019-01-28 21:35 ` [PATCH 06/18] io_uring: add fsync support Jens Axboe
                   ` (12 subsequent siblings)
  17 siblings, 6 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.

IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring holds indices into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.

Two new system calls are added for this:

io_uring_setup(entries, params)
	Sets up a context for doing async IO. On success, returns a file
	descriptor that the application can mmap to gain access to the
	SQ ring, CQ ring, and io_uring_sqes.

io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
	Initiates IO against the rings mapped to this fd, or waits for
	them to complete, or both. The behavior is controlled by the
	parameters passed in. If 'to_submit' is non-zero, then we'll
	try to submit new IO. If IORING_ENTER_GETEVENTS is set, the
	kernel will wait for 'min_complete' events, if they aren't
	already available. It's valid to set IORING_ENTER_GETEVENTS
	and 'min_complete' == 0 at the same time; this allows the
	kernel to return already completed events without waiting
	for them. This is useful only for polling, as for IRQ
	driven IO, the application can just check the CQ ring
	without entering the kernel.

With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.

For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.

Each io_uring is backed by a workqueue, to support buffered async IO
as well. We only punt to an async context if the command would
need to wait for IO on the device side. Any IO that can be satisfied
directly from the page cache is completed inline. This avoids the
slowness of the usual thread-pool approach, since cached data is
accessed as quickly as with a synchronous interface.

Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
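
To give a rough feel for the flow (this is an illustration, not part of
the patch; it trims error handling and uses a crude full memory barrier
where the sample application above uses finer-grained ring barriers), a
minimal program that submits a single NOP and waits for its completion
could look like this, assuming the UAPI header and the x86-64 syscall
numbers added by this series:

  #include <linux/io_uring.h>	/* UAPI header added by this patch */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #define __NR_io_uring_setup	425
  #define __NR_io_uring_enter	426

  int main(void)
  {
      struct io_uring_params p;
      struct io_uring_sqe *sqes;
      struct io_uring_cqe *cqes;
      unsigned *sq_tail, *sq_array, *cq_head;
      char *sq_ring, *cq_ring;
      int fd;

      memset(&p, 0, sizeof(p));
      fd = syscall(__NR_io_uring_setup, 4, &p);
      if (fd < 0)
          return 1;

      /* The SQ ring, the sqe array, and the CQ ring are mapped separately */
      sq_ring = mmap(NULL, p.sq_off.array + p.sq_entries * sizeof(unsigned),
                     PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd,
                     IORING_OFF_SQ_RING);
      sqes = mmap(NULL, p.sq_entries * sizeof(struct io_uring_sqe),
                  PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd,
                  IORING_OFF_SQES);
      cq_ring = mmap(NULL, p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe),
                     PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd,
                     IORING_OFF_CQ_RING);

      sq_tail = (unsigned *)(sq_ring + p.sq_off.tail);
      sq_array = (unsigned *)(sq_ring + p.sq_off.array);
      cq_head = (unsigned *)(cq_ring + p.cq_off.head);
      cqes = (struct io_uring_cqe *)(cq_ring + p.cq_off.cqes);

      /* Fill one NOP sqe and point the next SQ array slot at it */
      memset(&sqes[0], 0, sizeof(sqes[0]));
      sqes[0].opcode = IORING_OP_NOP;
      sqes[0].user_data = 0x42;
      sq_array[*sq_tail & (p.sq_entries - 1)] = 0;

      /* Publish the new tail; the kernel reads it once we enter */
      __sync_synchronize();
      *sq_tail += 1;

      /* Submit one sqe and wait for one completion in a single call */
      if (syscall(__NR_io_uring_enter, fd, 1, 1,
                  IORING_ENTER_GETEVENTS, NULL, 0) < 0)
          return 1;

      __sync_synchronize();
      printf("cqe: user_data=%llu res=%d\n",
             (unsigned long long)cqes[*cq_head & (p.cq_entries - 1)].user_data,
             cqes[*cq_head & (p.cq_entries - 1)].res);
      *cq_head += 1;
      return 0;
  }

In practice, the liburing library referenced in the cover letter hides
the mmap and barrier details behind a much simpler API.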

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 arch/x86/entry/syscalls/syscall_32.tbl |    2 +
 arch/x86/entry/syscalls/syscall_64.tbl |    2 +
 fs/Makefile                            |    1 +
 fs/io_uring.c                          | 1090 ++++++++++++++++++++++++
 include/linux/syscalls.h               |    6 +
 include/uapi/asm-generic/unistd.h      |    6 +-
 include/uapi/linux/io_uring.h          |   96 +++
 init/Kconfig                           |    9 +
 kernel/sys_ni.c                        |    2 +
 9 files changed, 1213 insertions(+), 1 deletion(-)
 create mode 100644 fs/io_uring.c
 create mode 100644 include/uapi/linux/io_uring.h

diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 3cf7b533b3d1..481c126259e9 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -398,3 +398,5 @@
 384	i386	arch_prctl		sys_arch_prctl			__ia32_compat_sys_arch_prctl
 385	i386	io_pgetevents		sys_io_pgetevents		__ia32_compat_sys_io_pgetevents
 386	i386	rseq			sys_rseq			__ia32_sys_rseq
+425	i386	io_uring_setup		sys_io_uring_setup		__ia32_sys_io_uring_setup
+426	i386	io_uring_enter		sys_io_uring_enter		__ia32_sys_io_uring_enter
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index f0b1709a5ffb..6a32a430c8e0 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -343,6 +343,8 @@
 332	common	statx			__x64_sys_statx
 333	common	io_pgetevents		__x64_sys_io_pgetevents
 334	common	rseq			__x64_sys_rseq
+425	common	io_uring_setup		__x64_sys_io_uring_setup
+426	common	io_uring_enter		__x64_sys_io_uring_enter
 
 #
 # x32-specific system call numbers start at 512 to avoid cache impact
diff --git a/fs/Makefile b/fs/Makefile
index 293733f61594..8e15d6fc4340 100644
--- a/fs/Makefile
+++ b/fs/Makefile
@@ -30,6 +30,7 @@ obj-$(CONFIG_TIMERFD)		+= timerfd.o
 obj-$(CONFIG_EVENTFD)		+= eventfd.o
 obj-$(CONFIG_USERFAULTFD)	+= userfaultfd.o
 obj-$(CONFIG_AIO)               += aio.o
+obj-$(CONFIG_IO_URING)		+= io_uring.o
 obj-$(CONFIG_FS_DAX)		+= dax.o
 obj-$(CONFIG_FS_ENCRYPTION)	+= crypto/
 obj-$(CONFIG_FILE_LOCKING)      += locks.o
diff --git a/fs/io_uring.c b/fs/io_uring.c
new file mode 100644
index 000000000000..309eed3629d2
--- /dev/null
+++ b/fs/io_uring.c
@@ -0,0 +1,1090 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Shared application/kernel submission and completion ring pairs, for
+ * supporting fast/efficient IO.
+ *
+ * Copyright (C) 2018-2019 Jens Axboe
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/syscalls.h>
+#include <linux/compat.h>
+#include <linux/refcount.h>
+#include <linux/uio.h>
+
+#include <linux/sched/signal.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+#include <linux/fdtable.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/mmu_context.h>
+#include <linux/percpu.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <linux/blkdev.h>
+#include <linux/anon_inodes.h>
+#include <linux/sched/mm.h>
+
+#include <linux/uaccess.h>
+#include <linux/nospec.h>
+
+#include <uapi/linux/io_uring.h>
+
+#include "internal.h"
+
+struct io_uring {
+	u32 head ____cacheline_aligned_in_smp;
+	u32 tail ____cacheline_aligned_in_smp;
+};
+
+struct io_sq_ring {
+	struct io_uring		r;
+	u32			ring_mask;
+	u32			ring_entries;
+	u32			dropped;
+	u32			flags;
+	u32			array[];
+};
+
+struct io_cq_ring {
+	struct io_uring		r;
+	u32			ring_mask;
+	u32			ring_entries;
+	u32			overflow;
+	struct io_uring_cqe	cqes[];
+};
+
+struct io_ring_ctx {
+	struct {
+		struct percpu_ref	refs;
+	} ____cacheline_aligned_in_smp;
+
+	struct {
+		unsigned int		flags;
+
+		/* SQ ring */
+		struct io_sq_ring	*sq_ring;
+		unsigned		cached_sq_head;
+		unsigned		sq_entries;
+		unsigned		sq_mask;
+		unsigned		sq_thread_cpu;
+		struct io_uring_sqe	*sq_sqes;
+	} ____cacheline_aligned_in_smp;
+
+	/* IO offload */
+	struct workqueue_struct	*sqo_wq;
+	struct mm_struct	*sqo_mm;
+	struct files_struct	*sqo_files;
+
+	struct {
+		/* CQ ring */
+		struct io_cq_ring	*cq_ring;
+		unsigned		cached_cq_tail;
+		unsigned		cq_entries;
+		unsigned		cq_mask;
+		struct wait_queue_head	cq_wait;
+		struct fasync_struct	*cq_fasync;
+	} ____cacheline_aligned_in_smp;
+
+	struct user_struct	*user;
+
+	struct completion	ctx_done;
+
+	struct {
+		struct mutex		uring_lock;
+		wait_queue_head_t	wait;
+	} ____cacheline_aligned_in_smp;
+
+	struct {
+		spinlock_t		completion_lock;
+	} ____cacheline_aligned_in_smp;
+};
+
+struct sqe_submit {
+	const struct io_uring_sqe	*sqe;
+	unsigned			index;
+};
+
+struct io_kiocb {
+	union {
+		struct kiocb		rw;
+		struct sqe_submit	submit;
+	};
+
+	struct io_ring_ctx	*ctx;
+	struct list_head	list;
+	unsigned int		flags;
+#define REQ_F_FORCE_NONBLOCK	1	/* inline submission attempt */
+	u64			user_data;
+
+	struct work_struct	work;
+};
+
+#define IO_PLUG_THRESHOLD		2
+
+static struct kmem_cache *req_cachep;
+
+static const struct file_operations io_uring_fops;
+
+static void io_ring_ctx_ref_free(struct percpu_ref *ref)
+{
+	struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);
+
+	complete(&ctx->ctx_done);
+}
+
+static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+{
+	struct io_ring_ctx *ctx;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return NULL;
+
+	if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free, 0, GFP_KERNEL)) {
+		kfree(ctx);
+		return NULL;
+	}
+
+	ctx->flags = p->flags;
+	init_waitqueue_head(&ctx->cq_wait);
+	init_completion(&ctx->ctx_done);
+	mutex_init(&ctx->uring_lock);
+	init_waitqueue_head(&ctx->wait);
+	spin_lock_init(&ctx->completion_lock);
+	return ctx;
+}
+
+static void io_commit_cqring(struct io_ring_ctx *ctx)
+{
+	struct io_cq_ring *ring = ctx->cq_ring;
+
+	if (ctx->cached_cq_tail != ring->r.tail) {
+		/* order cqe stores with ring update */
+		smp_wmb();
+		ring->r.tail = ctx->cached_cq_tail;
+		/* write side barrier of tail update, app has read side */
+		smp_wmb();
+
+		if (wq_has_sleeper(&ctx->cq_wait)) {
+			wake_up_interruptible(&ctx->cq_wait);
+			kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
+		}
+	}
+}
+
+static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
+{
+	struct io_cq_ring *ring = ctx->cq_ring;
+	unsigned tail;
+
+	tail = ctx->cached_cq_tail;
+	smp_rmb();
+	if (tail + 1 == READ_ONCE(ring->r.head))
+		return NULL;
+
+	ctx->cached_cq_tail++;
+	return &ring->cqes[tail & ctx->cq_mask];
+}
+
+static void __io_cqring_add_event(struct io_ring_ctx *ctx, u64 ki_user_data,
+				  long res, unsigned ev_flags)
+{
+	struct io_uring_cqe *cqe;
+
+	/*
+	 * If we can't get a cq entry, userspace overflowed the
+	 * submission (by quite a lot). Increment the overflow count in
+	 * the ring.
+	 */
+	cqe = io_get_cqring(ctx);
+	if (cqe) {
+		cqe->user_data = ki_user_data;
+		cqe->res = res;
+		cqe->flags = ev_flags;
+		io_commit_cqring(ctx);
+	} else
+		ctx->cq_ring->overflow++;
+
+	if (waitqueue_active(&ctx->wait))
+		wake_up(&ctx->wait);
+}
+
+static void io_cqring_add_event(struct io_ring_ctx *ctx, u64 ki_user_data,
+				long res, unsigned ev_flags)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ctx->completion_lock, flags);
+	__io_cqring_add_event(ctx, ki_user_data, res, ev_flags);
+	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+}
+
+static void io_ring_drop_ctx_refs(struct io_ring_ctx *ctx, unsigned refs)
+{
+	percpu_ref_put_many(&ctx->refs, refs);
+
+	if (waitqueue_active(&ctx->wait))
+		wake_up(&ctx->wait);
+}
+
+static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
+{
+	struct io_kiocb *req;
+
+	/* safe to use the non tryget, as we're inside ring ref already */
+	percpu_ref_get(&ctx->refs);
+
+	req = kmem_cache_alloc(req_cachep, GFP_ATOMIC | __GFP_NOWARN);
+	if (req) {
+		req->ctx = ctx;
+		req->flags = 0;
+		return req;
+	}
+
+	io_ring_drop_ctx_refs(ctx, 1);
+	return NULL;
+}
+
+static void io_free_req(struct io_kiocb *req)
+{
+	io_ring_drop_ctx_refs(req->ctx, 1);
+	kmem_cache_free(req_cachep, req);
+}
+
+static void kiocb_end_write(struct kiocb *kiocb)
+{
+	if (kiocb->ki_flags & IOCB_WRITE) {
+		struct inode *inode = file_inode(kiocb->ki_filp);
+
+		/*
+		 * Tell lockdep we inherited freeze protection from submission
+		 * thread.
+		 */
+		if (S_ISREG(inode->i_mode))
+			__sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
+		file_end_write(kiocb->ki_filp);
+	}
+}
+
+static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
+{
+	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw);
+
+	kiocb_end_write(kiocb);
+
+	fput(kiocb->ki_filp);
+	io_cqring_add_event(req->ctx, req->user_data, res, 0);
+	io_free_req(req);
+}
+
+static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+		      bool force_nonblock)
+{
+	struct kiocb *kiocb = &req->rw;
+	int ret;
+
+	kiocb->ki_filp = fget(sqe->fd);
+	if (unlikely(!kiocb->ki_filp))
+		return -EBADF;
+	kiocb->ki_pos = sqe->off;
+	kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
+	kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
+	if (sqe->ioprio) {
+		ret = ioprio_check_cap(sqe->ioprio);
+		if (ret)
+			goto out_fput;
+
+		kiocb->ki_ioprio = sqe->ioprio;
+	} else
+		kiocb->ki_ioprio = get_current_ioprio();
+
+	ret = kiocb_set_rw_flags(kiocb, sqe->rw_flags);
+	if (unlikely(ret))
+		goto out_fput;
+	if (force_nonblock) {
+		kiocb->ki_flags |= IOCB_NOWAIT;
+		req->flags |= REQ_F_FORCE_NONBLOCK;
+	}
+	if (kiocb->ki_flags & IOCB_HIPRI) {
+		ret = -EINVAL;
+		goto out_fput;
+	}
+
+	kiocb->ki_complete = io_complete_rw;
+	return 0;
+out_fput:
+	fput(kiocb->ki_filp);
+	return ret;
+}
+
+static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
+{
+	switch (ret) {
+	case -EIOCBQUEUED:
+		break;
+	case -ERESTARTSYS:
+	case -ERESTARTNOINTR:
+	case -ERESTARTNOHAND:
+	case -ERESTART_RESTARTBLOCK:
+		/*
+		 * We can't just restart the syscall, since previously
+		 * submitted sqes may already be in progress. Just fail this
+		 * IO with EINTR.
+		 */
+		ret = -EINTR;
+		/* fall through */
+	default:
+		kiocb->ki_complete(kiocb, ret, 0);
+	}
+}
+
+static int io_import_iovec(struct io_ring_ctx *ctx, int rw,
+			   const struct io_uring_sqe *sqe,
+			   struct iovec **iovec, struct iov_iter *iter)
+{
+	void __user *buf = u64_to_user_ptr(sqe->addr);
+
+#ifdef CONFIG_COMPAT
+	if (in_compat_syscall())
+		return compat_import_iovec(rw, buf, sqe->len, UIO_FASTIOV,
+						iovec, iter);
+#endif
+
+	return import_iovec(rw, buf, sqe->len, UIO_FASTIOV, iovec, iter);
+}
+
+static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+		       bool force_nonblock)
+{
+	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+	struct kiocb *kiocb = &req->rw;
+	struct iov_iter iter;
+	struct file *file;
+	ssize_t ret;
+
+	ret = io_prep_rw(req, sqe, force_nonblock);
+	if (ret)
+		return ret;
+	file = kiocb->ki_filp;
+
+	ret = -EBADF;
+	if (unlikely(!(file->f_mode & FMODE_READ)))
+		goto out_fput;
+	ret = -EINVAL;
+	if (unlikely(!file->f_op->read_iter))
+		goto out_fput;
+
+	ret = io_import_iovec(req->ctx, READ, sqe, &iovec, &iter);
+	if (ret)
+		goto out_fput;
+
+	ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_iter_count(&iter));
+	if (!ret) {
+		ssize_t ret2;
+
+		/* Catch -EAGAIN return for forced non-blocking submission */
+		ret2 = call_read_iter(file, kiocb, &iter);
+		if (!force_nonblock || ret2 != -EAGAIN)
+			io_rw_done(kiocb, ret2);
+		else
+			ret = -EAGAIN;
+	}
+	kfree(iovec);
+out_fput:
+	if (unlikely(ret))
+		fput(file);
+	return ret;
+}
+
+static ssize_t io_write(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+			bool force_nonblock)
+{
+	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+	struct kiocb *kiocb = &req->rw;
+	struct iov_iter iter;
+	struct file *file;
+	ssize_t ret;
+
+	ret = io_prep_rw(req, sqe, force_nonblock);
+	if (ret)
+		return ret;
+	file = kiocb->ki_filp;
+
+	ret = -EAGAIN;
+	if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT))
+		goto out_fput;
+
+	ret = -EBADF;
+	if (unlikely(!(file->f_mode & FMODE_WRITE)))
+		goto out_fput;
+	ret = -EINVAL;
+	if (unlikely(!file->f_op->write_iter))
+		goto out_fput;
+
+	ret = io_import_iovec(req->ctx, WRITE, sqe, &iovec, &iter);
+	if (ret)
+		goto out_fput;
+
+	ret = rw_verify_area(WRITE, file, &kiocb->ki_pos,
+				iov_iter_count(&iter));
+	if (!ret) {
+		/*
+		 * Open-code file_start_write here to grab freeze protection,
+		 * which will be released by another thread in
+		 * io_complete_rw().  Fool lockdep by telling it the lock got
+		 * released so that it doesn't complain about the held lock when
+		 * we return to userspace.
+		 */
+		if (S_ISREG(file_inode(file)->i_mode)) {
+			__sb_start_write(file_inode(file)->i_sb,
+						SB_FREEZE_WRITE, true);
+			__sb_writers_release(file_inode(file)->i_sb,
+						SB_FREEZE_WRITE);
+		}
+		kiocb->ki_flags |= IOCB_WRITE;
+		io_rw_done(kiocb, call_write_iter(file, kiocb, &iter));
+	}
+	kfree(iovec);
+out_fput:
+	if (unlikely(ret))
+		fput(file);
+	return ret;
+}
+
+/*
+ * IORING_OP_NOP just posts a completion event, nothing else.
+ */
+static int io_nop(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+
+	io_cqring_add_event(ctx, sqe->user_data, 0, 0);
+	io_free_req(req);
+	return 0;
+}
+
+static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
+			   struct sqe_submit *s, bool force_nonblock)
+{
+	const struct io_uring_sqe *sqe = s->sqe;
+	ssize_t ret;
+
+	if (unlikely(s->index >= ctx->sq_entries))
+		return -EINVAL;
+	req->user_data = sqe->user_data;
+
+	ret = -EINVAL;
+	switch (sqe->opcode) {
+	case IORING_OP_NOP:
+		ret = io_nop(req, sqe);
+		break;
+	case IORING_OP_READV:
+		ret = io_read(req, sqe, force_nonblock);
+		break;
+	case IORING_OP_WRITEV:
+		ret = io_write(req, sqe, force_nonblock);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+static void io_sq_wq_submit_work(struct work_struct *work)
+{
+	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+	struct sqe_submit *s = &req->submit;
+	u64 user_data = s->sqe->user_data;
+	struct io_ring_ctx *ctx = req->ctx;
+	mm_segment_t old_fs = get_fs();
+	struct files_struct *old_files;
+	int ret;
+
+	 /* Ensure we clear previously set forced non-block flag */
+	req->flags &= ~REQ_F_FORCE_NONBLOCK;
+
+	old_files = current->files;
+	current->files = ctx->sqo_files;
+
+	if (!mmget_not_zero(ctx->sqo_mm)) {
+		ret = -EFAULT;
+		goto err;
+	}
+
+	use_mm(ctx->sqo_mm);
+	set_fs(USER_DS);
+
+	ret = __io_submit_sqe(ctx, req, s, false);
+
+	set_fs(old_fs);
+	unuse_mm(ctx->sqo_mm);
+	mmput(ctx->sqo_mm);
+err:
+	if (ret) {
+		io_cqring_add_event(ctx, user_data, ret, 0);
+		io_free_req(req);
+	}
+	current->files = old_files;
+}
+
+static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s)
+{
+	struct io_kiocb *req;
+	ssize_t ret;
+
+	/* enforce forwards compatibility on users */
+	if (unlikely(s->sqe->flags))
+		return -EINVAL;
+
+	req = io_get_req(ctx);
+	if (unlikely(!req))
+		return -EAGAIN;
+
+	ret = __io_submit_sqe(ctx, req, s, true);
+	if (ret == -EAGAIN) {
+		memcpy(&req->submit, s, sizeof(*s));
+		INIT_WORK(&req->work, io_sq_wq_submit_work);
+		queue_work(ctx->sqo_wq, &req->work);
+		ret = 0;
+	}
+	if (ret)
+		io_free_req(req);
+
+	return ret;
+}
+
+static void io_commit_sqring(struct io_ring_ctx *ctx)
+{
+	struct io_sq_ring *ring = ctx->sq_ring;
+
+	if (ctx->cached_sq_head != ring->r.head) {
+		ring->r.head = ctx->cached_sq_head;
+		/* write side barrier of head update, app has read side */
+		smp_wmb();
+	}
+}
+
+/*
+ * Undo last io_get_sqring()
+ */
+static void io_drop_sqring(struct io_ring_ctx *ctx)
+{
+	ctx->cached_sq_head--;
+}
+
+static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
+{
+	struct io_sq_ring *ring = ctx->sq_ring;
+	unsigned head;
+
+	/*
+	 * The cached sq head (or cq tail) serves two purposes:
+	 *
+	 * 1) allows us to batch the cost of updating the user visible
+	 *    head updates.
+	 * 2) allows the kernel side to track the head on its own, even
+	 *    though the application is the one updating it.
+	 */
+	head = ctx->cached_sq_head;
+	smp_rmb();
+	if (head == READ_ONCE(ring->r.tail))
+		return false;
+
+	head = ring->array[head & ctx->sq_mask];
+	if (head < ctx->sq_entries) {
+		s->index = head;
+		s->sqe = &ctx->sq_sqes[head];
+		ctx->cached_sq_head++;
+		return true;
+	}
+
+	/* drop invalid entries */
+	ctx->cached_sq_head++;
+	ring->dropped++;
+	smp_wmb();
+	return false;
+}
+
+static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
+{
+	int i, ret = 0, submit = 0;
+	struct blk_plug plug;
+
+	if (to_submit > IO_PLUG_THRESHOLD)
+		blk_start_plug(&plug);
+
+	for (i = 0; i < to_submit; i++) {
+		struct sqe_submit s;
+
+		if (!io_get_sqring(ctx, &s))
+			break;
+
+		ret = io_submit_sqe(ctx, &s);
+		if (ret) {
+			io_drop_sqring(ctx);
+			break;
+		}
+
+		submit++;
+	}
+	io_commit_sqring(ctx);
+
+	if (to_submit > IO_PLUG_THRESHOLD)
+		blk_finish_plug(&plug);
+
+	return submit ? submit : ret;
+}
+
+/*
+ * Wait until events become available, if we don't already have some. The
+ * application must reap them itself, as they reside on the shared cq ring.
+ */
+static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+			  const sigset_t __user *sig, size_t sigsz)
+{
+	struct io_cq_ring *ring = ctx->cq_ring;
+	sigset_t ksigmask, sigsaved;
+	DEFINE_WAIT(wait);
+	int ret = 0;
+
+	smp_rmb();
+	if (ring->r.head != ring->r.tail)
+		return 0;
+	if (!min_events)
+		return 0;
+
+	if (sig) {
+		ret = set_user_sigmask(sig, &ksigmask, &sigsaved, sigsz);
+		if (ret)
+			return ret;
+	}
+
+	do {
+		prepare_to_wait(&ctx->wait, &wait, TASK_INTERRUPTIBLE);
+
+		ret = 0;
+		smp_rmb();
+		if (ring->r.head != ring->r.tail)
+			break;
+
+		schedule();
+
+		ret = -EINTR;
+		if (signal_pending(current))
+			break;
+	} while (1);
+
+	finish_wait(&ctx->wait, &wait);
+
+	if (sig)
+		restore_user_sigmask(sig, &sigsaved);
+
+	return ring->r.head == ring->r.tail ? ret : 0;
+}
+
+static int __io_uring_enter(struct io_ring_ctx *ctx, unsigned to_submit,
+			    unsigned min_complete, unsigned flags,
+			    const sigset_t __user *sig, size_t sigsz)
+{
+	int submitted, ret;
+
+	submitted = ret = 0;
+	if (to_submit) {
+		submitted = io_ring_submit(ctx, to_submit);
+		if (submitted < 0)
+			return submitted;
+	}
+	if (flags & IORING_ENTER_GETEVENTS) {
+		/*
+		 * The application could have included the 'to_submit' count
+		 * in how many events it wanted to wait for. If we failed to
+		 * submit the desired count, we may need to adjust the number
+		 * of events to poll/wait for.
+		 */
+		if (submitted < to_submit)
+			min_complete = min_t(unsigned, submitted, min_complete);
+
+		ret = io_cqring_wait(ctx, min_complete, sig, sigsz);
+	}
+
+	return submitted ? submitted : ret;
+}
+
+static int io_sq_offload_start(struct io_ring_ctx *ctx)
+{
+	int ret;
+
+	ctx->sqo_mm = current->mm;
+
+	/*
+	 * This is safe since 'current' has the fd installed, and if that gets
+	 * closed on exit, then fops->release() is invoked which waits for the
+	 * async contexts to flush and exit before exiting.
+	 */
+	ret = -EBADF;
+	ctx->sqo_files = current->files;
+	if (!ctx->sqo_files)
+		goto err;
+
+	/* Do QD, or 2 * CPUS, whatever is smallest */
+	ctx->sqo_wq = alloc_workqueue("io_ring-wq", WQ_UNBOUND | WQ_FREEZABLE,
+			min(ctx->sq_entries - 1, 2 * num_online_cpus()));
+	if (!ctx->sqo_wq) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	return 0;
+err:
+	if (ctx->sqo_files)
+		ctx->sqo_files = NULL;
+	ctx->sqo_mm = NULL;
+	return ret;
+}
+
+static void __io_unaccount_mem(struct user_struct *user, unsigned long nr_pages)
+{
+	atomic_long_sub(nr_pages, &user->locked_vm);
+}
+
+static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+{
+	if (ctx->user)
+		__io_unaccount_mem(ctx->user, nr_pages);
+}
+
+static int __io_account_mem(struct user_struct *user, unsigned long nr_pages)
+{
+	unsigned long page_limit, cur_pages, new_pages;
+
+	/* Don't allow more pages than we can safely lock */
+	page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+
+	do {
+		cur_pages = atomic_long_read(&user->locked_vm);
+		new_pages = cur_pages + nr_pages;
+		if (new_pages > page_limit)
+			return -ENOMEM;
+	} while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
+					new_pages) != cur_pages);
+
+	return 0;
+}
+
+static void io_mem_free(void *ptr)
+{
+	struct page *page = virt_to_head_page(ptr);
+
+	if (put_page_testzero(page))
+		free_compound_page(page);
+}
+
+static void *io_mem_alloc(size_t size)
+{
+	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP |
+				__GFP_NORETRY;
+
+	return (void *) __get_free_pages(gfp_flags, get_order(size));
+}
+
+static unsigned long ring_pages(unsigned sq_entries, unsigned cq_entries)
+{
+	struct io_sq_ring *sq_ring;
+	struct io_cq_ring *cq_ring;
+	size_t bytes;
+
+	bytes = struct_size(sq_ring, array, sq_entries);
+	bytes += array_size(sizeof(struct io_uring_sqe), sq_entries);
+	bytes += struct_size(cq_ring, cqes, cq_entries);
+
+	return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
+}
+
+static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+{
+	destroy_workqueue(ctx->sqo_wq);
+
+	io_mem_free(ctx->sq_ring);
+	io_mem_free(ctx->sq_sqes);
+	io_mem_free(ctx->cq_ring);
+
+	percpu_ref_exit(&ctx->refs);
+	io_unaccount_mem(ctx, ring_pages(ctx->sq_entries, ctx->cq_entries));
+	kfree(ctx);
+}
+
+static __poll_t io_uring_poll(struct file *file, poll_table *wait)
+{
+	struct io_ring_ctx *ctx = file->private_data;
+	__poll_t mask = 0;
+
+	poll_wait(file, &ctx->cq_wait, wait);
+	smp_rmb();
+	if (ctx->sq_ring->r.tail + 1 != ctx->cached_sq_head)
+		mask |= EPOLLOUT | EPOLLWRNORM;
+	if (ctx->cq_ring->r.head != ctx->cached_cq_tail)
+		mask |= EPOLLIN | EPOLLRDNORM;
+
+	return mask;
+}
+
+static int io_uring_fasync(int fd, struct file *file, int on)
+{
+	struct io_ring_ctx *ctx = file->private_data;
+
+	return fasync_helper(fd, file, on, &ctx->cq_fasync);
+}
+
+static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+{
+	mutex_lock(&ctx->uring_lock);
+	percpu_ref_kill(&ctx->refs);
+	mutex_unlock(&ctx->uring_lock);
+
+	wait_for_completion(&ctx->ctx_done);
+	io_ring_ctx_free(ctx);
+}
+
+static int io_uring_release(struct inode *inode, struct file *file)
+{
+	struct io_ring_ctx *ctx = file->private_data;
+
+	file->private_data = NULL;
+	io_ring_ctx_wait_and_kill(ctx);
+	return 0;
+}
+
+static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	loff_t offset = (loff_t) vma->vm_pgoff << PAGE_SHIFT;
+	unsigned long sz = vma->vm_end - vma->vm_start;
+	struct io_ring_ctx *ctx = file->private_data;
+	unsigned long pfn;
+	struct page *page;
+	void *ptr;
+
+	switch (offset) {
+	case IORING_OFF_SQ_RING:
+		ptr = ctx->sq_ring;
+		break;
+	case IORING_OFF_SQES:
+		ptr = ctx->sq_sqes;
+		break;
+	case IORING_OFF_CQ_RING:
+		ptr = ctx->cq_ring;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	page = virt_to_head_page(ptr);
+	if (sz > (PAGE_SIZE << compound_order(page)))
+		return -EINVAL;
+
+	pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
+	return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
+}
+
+SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+		u32, min_complete, u32, flags, const sigset_t __user *, sig,
+		size_t, sigsz)
+{
+	struct io_ring_ctx *ctx;
+	long ret = -EBADF;
+	struct fd f;
+
+	f = fdget(fd);
+	if (!f.file)
+		return -EBADF;
+
+	ret = -EOPNOTSUPP;
+	if (f.file->f_op != &io_uring_fops)
+		goto out_fput;
+
+	ret = -ENXIO;
+	ctx = f.file->private_data;
+	if (!percpu_ref_tryget(&ctx->refs))
+		goto out_fput;
+
+	ret = -EBUSY;
+	if (!mutex_trylock(&ctx->uring_lock))
+		goto out_ctx;
+
+	ret = __io_uring_enter(ctx, to_submit, min_complete, flags, sig, sigsz);
+	mutex_unlock(&ctx->uring_lock);
+out_ctx:
+	io_ring_drop_ctx_refs(ctx, 1);
+out_fput:
+	fdput(f);
+	return ret;
+}
+
+static const struct file_operations io_uring_fops = {
+	.release	= io_uring_release,
+	.mmap		= io_uring_mmap,
+	.poll		= io_uring_poll,
+	.fasync		= io_uring_fasync,
+};
+
+static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+				  struct io_uring_params *p)
+{
+	struct io_sq_ring *sq_ring;
+	struct io_cq_ring *cq_ring;
+	size_t size;
+
+	sq_ring = io_mem_alloc(struct_size(sq_ring, array, p->sq_entries));
+	if (!sq_ring)
+		return -ENOMEM;
+
+	ctx->sq_ring = sq_ring;
+	sq_ring->ring_mask = p->sq_entries - 1;
+	sq_ring->ring_entries = p->sq_entries;
+	ctx->sq_mask = sq_ring->ring_mask;
+	ctx->sq_entries = sq_ring->ring_entries;
+
+	size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
+	if (size == SIZE_MAX)
+		return -EOVERFLOW;
+
+	ctx->sq_sqes = io_mem_alloc(size);
+	if (!ctx->sq_sqes) {
+		io_mem_free(ctx->sq_ring);
+		return -ENOMEM;
+	}
+
+	cq_ring = io_mem_alloc(struct_size(cq_ring, cqes, p->cq_entries));
+	if (!cq_ring) {
+		io_mem_free(ctx->sq_ring);
+		io_mem_free(ctx->sq_sqes);
+		return -ENOMEM;
+	}
+
+	ctx->cq_ring = cq_ring;
+	cq_ring->ring_mask = p->cq_entries - 1;
+	cq_ring->ring_entries = p->cq_entries;
+	ctx->cq_mask = cq_ring->ring_mask;
+	ctx->cq_entries = cq_ring->ring_entries;
+	return 0;
+}
+
+static int io_uring_create(unsigned entries, struct io_uring_params *p)
+{
+	struct user_struct *user = NULL;
+	struct io_ring_ctx *ctx;
+	int ret;
+
+	if (!entries || entries > IORING_MAX_ENTRIES)
+		return -EINVAL;
+
+	/*
+	 * Use twice as many entries for the CQ ring. It's possible for the
+	 * application to drive a higher depth than the size of the SQ ring,
+	 * since the sqes are only used at submission time. This allows for
+	 * some flexibility in overcommitting a bit.
+	 */
+	p->sq_entries = roundup_pow_of_two(entries);
+	p->cq_entries = 2 * p->sq_entries;
+
+	if (!capable(CAP_IPC_LOCK)) {
+		user = get_uid(current_user());
+		ret = __io_account_mem(user, ring_pages(p->sq_entries,
+							p->cq_entries));
+		if (ret) {
+			free_uid(user);
+			return ret;
+		}
+	}
+
+	ctx = io_ring_ctx_alloc(p);
+	if (!ctx) {
+		__io_unaccount_mem(user, ring_pages(p->sq_entries,
+							p->cq_entries));
+		free_uid(user);
+		return -ENOMEM;
+	}
+	ctx->user = user;
+
+	ret = io_allocate_scq_urings(ctx, p);
+	if (ret)
+		goto err;
+
+	ret = io_sq_offload_start(ctx);
+	if (ret)
+		goto err;
+
+	ret = anon_inode_getfd("[io_uring]", &io_uring_fops, ctx,
+				O_RDWR | O_CLOEXEC);
+	if (ret < 0)
+		goto err;
+
+	memset(&p->sq_off, 0, sizeof(p->sq_off));
+	p->sq_off.head = offsetof(struct io_sq_ring, r.head);
+	p->sq_off.tail = offsetof(struct io_sq_ring, r.tail);
+	p->sq_off.ring_mask = offsetof(struct io_sq_ring, ring_mask);
+	p->sq_off.ring_entries = offsetof(struct io_sq_ring, ring_entries);
+	p->sq_off.flags = offsetof(struct io_sq_ring, flags);
+	p->sq_off.dropped = offsetof(struct io_sq_ring, dropped);
+	p->sq_off.array = offsetof(struct io_sq_ring, array);
+
+	memset(&p->cq_off, 0, sizeof(p->cq_off));
+	p->cq_off.head = offsetof(struct io_cq_ring, r.head);
+	p->cq_off.tail = offsetof(struct io_cq_ring, r.tail);
+	p->cq_off.ring_mask = offsetof(struct io_cq_ring, ring_mask);
+	p->cq_off.ring_entries = offsetof(struct io_cq_ring, ring_entries);
+	p->cq_off.overflow = offsetof(struct io_cq_ring, overflow);
+	p->cq_off.cqes = offsetof(struct io_cq_ring, cqes);
+	return ret;
+err:
+	io_ring_ctx_wait_and_kill(ctx);
+	return ret;
+}
+
+/*
+ * Sets up an aio uring context, and returns the fd. Applications asks for a
+ * ring size, we return the actual sq/cq ring sizes (among other things) in the
+ * params structure passed in.
+ */
+static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
+{
+	struct io_uring_params p;
+	long ret;
+	int i;
+
+	if (copy_from_user(&p, params, sizeof(p)))
+		return -EFAULT;
+	for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
+		if (p.resv[i])
+			return -EINVAL;
+	}
+
+	if (p.flags)
+		return -EINVAL;
+
+	ret = io_uring_create(entries, &p);
+	if (ret < 0)
+		return ret;
+
+	if (copy_to_user(params, &p, sizeof(p)))
+		return -EFAULT;
+
+	return ret;
+}
+
+SYSCALL_DEFINE2(io_uring_setup, u32, entries,
+		struct io_uring_params __user *, params)
+{
+	return io_uring_setup(entries, params);
+}
+
+static int __init io_uring_init(void)
+{
+	req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC);
+	return 0;
+};
+__initcall(io_uring_init);
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 257cccba3062..3072dbaa7869 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -69,6 +69,7 @@ struct file_handle;
 struct sigaltstack;
 struct rseq;
 union bpf_attr;
+struct io_uring_params;
 
 #include <linux/types.h>
 #include <linux/aio_abi.h>
@@ -309,6 +310,11 @@ asmlinkage long sys_io_pgetevents_time32(aio_context_t ctx_id,
 				struct io_event __user *events,
 				struct old_timespec32 __user *timeout,
 				const struct __aio_sigset *sig);
+asmlinkage long sys_io_uring_setup(u32 entries,
+				struct io_uring_params __user *p);
+asmlinkage long sys_io_uring_enter(unsigned int fd, u32 to_submit,
+				u32 min_complete, u32 flags,
+				const sigset_t __user *sig, size_t sigsz);
 
 /* fs/xattr.c */
 asmlinkage long sys_setxattr(const char __user *path, const char __user *name,
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index d90127298f12..87871e7b7ea7 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -740,9 +740,13 @@ __SC_COMP(__NR_io_pgetevents, sys_io_pgetevents, compat_sys_io_pgetevents)
 __SYSCALL(__NR_rseq, sys_rseq)
 #define __NR_kexec_file_load 294
 __SYSCALL(__NR_kexec_file_load,     sys_kexec_file_load)
+#define __NR_io_uring_setup 425
+__SYSCALL(__NR_io_uring_setup, sys_io_uring_setup)
+#define __NR_io_uring_enter 426
+__SYSCALL(__NR_io_uring_enter, sys_io_uring_enter)
 
 #undef __NR_syscalls
-#define __NR_syscalls 295
+#define __NR_syscalls 427
 
 /*
  * 32 bit systems traditionally used different
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
new file mode 100644
index 000000000000..ce65db9269a8
--- /dev/null
+++ b/include/uapi/linux/io_uring.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Header file for the io_uring interface.
+ *
+ * Copyright (C) 2019 Jens Axboe
+ * Copyright (C) 2019 Christoph Hellwig
+ */
+#ifndef LINUX_IO_URING_H
+#define LINUX_IO_URING_H
+
+#include <linux/fs.h>
+#include <linux/types.h>
+
+#define IORING_MAX_ENTRIES	4096
+
+/*
+ * IO submission data structure (Submission Queue Entry)
+ */
+struct io_uring_sqe {
+	__u8	opcode;		/* type of operation for this sqe */
+	__u8	flags;		/* as of now unused */
+	__u16	ioprio;		/* ioprio for the request */
+	__s32	fd;		/* file descriptor to do IO on */
+	__u64	off;		/* offset into file */
+	__u64	addr;		/* pointer to buffer or iovecs */
+	__u32	len;		/* buffer size or number of iovecs */
+	union {
+		__kernel_rwf_t	rw_flags;
+		__u32		__resv;
+	};
+	__u64	user_data;	/* data to be passed back at completion time */
+	__u64	__pad2[3];
+};
+
+#define IORING_OP_NOP		0
+#define IORING_OP_READV		1
+#define IORING_OP_WRITEV	2
+
+/*
+ * IO completion data structure (Completion Queue Entry)
+ */
+struct io_uring_cqe {
+	__u64	user_data;	/* sqe->data submission passed back */
+	__s32	res;		/* result code for this event */
+	__u32	flags;
+};
+
+/*
+ * Magic offsets for the application to mmap the data it needs
+ */
+#define IORING_OFF_SQ_RING		0ULL
+#define IORING_OFF_CQ_RING		0x8000000ULL
+#define IORING_OFF_SQES			0x10000000ULL
+
+/*
+ * Filled with the offset for mmap(2)
+ */
+struct io_sqring_offsets {
+	__u32 head;
+	__u32 tail;
+	__u32 ring_mask;
+	__u32 ring_entries;
+	__u32 flags;
+	__u32 dropped;
+	__u32 array;
+	__u32 resv[3];
+};
+
+struct io_cqring_offsets {
+	__u32 head;
+	__u32 tail;
+	__u32 ring_mask;
+	__u32 ring_entries;
+	__u32 overflow;
+	__u32 cqes;
+	__u32 resv[4];
+};
+
+/*
+ * io_uring_enter(2) flags
+ */
+#define IORING_ENTER_GETEVENTS	(1 << 0)
+
+/*
+ * Passed in for io_uring_setup(2). Copied back with updated info on success
+ */
+struct io_uring_params {
+	__u32 sq_entries;
+	__u32 cq_entries;
+	__u32 flags;
+	__u16 resv[10];
+	struct io_sqring_offsets sq_off;
+	struct io_cqring_offsets cq_off;
+};
+
+#endif
diff --git a/init/Kconfig b/init/Kconfig
index 513fa544a134..0cf723867e69 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1403,6 +1403,15 @@ config AIO
 	  by some high performance threaded applications. Disabling
 	  this option saves about 7k.
 
+config IO_URING
+	bool "Enable IO uring support" if EXPERT
+	select ANON_INODES
+	default y
+	help
+	  This option enables support for the io_uring interface, enabling
+	  applications to submit and completion IO through submission and
+	  completion rings that are shared between the kernel and application.
+
 config ADVISE_SYSCALLS
 	bool "Enable madvise/fadvise syscalls" if EXPERT
 	default y
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index ab9d0e3c6d50..ee5e523564bb 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -46,6 +46,8 @@ COND_SYSCALL(io_getevents);
 COND_SYSCALL(io_pgetevents);
 COND_SYSCALL_COMPAT(io_getevents);
 COND_SYSCALL_COMPAT(io_pgetevents);
+COND_SYSCALL(io_uring_setup);
+COND_SYSCALL(io_uring_enter);
 
 /* fs/xattr.c */
 
-- 
2.17.1

* [PATCH 06/18] io_uring: add fsync support
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (4 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 05/18] Add io_uring IO interface Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 07/18] io_uring: support for IO polling Jens Axboe
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

From: Christoph Hellwig <hch@lst.de>

Add a new fsync opcode, which either syncs a range if one is passed,
or the whole file if the offset and length fields are both cleared
to zero.  A flag is provided to use fdatasync semantics, that is, only
force out the metadata which is required to retrieve the file data,
but not other metadata such as timestamps.
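
As an illustration (not part of the patch), preparing a submission
entry for a ranged fdatasync could look roughly like this, with 'sqe'
pointing at a free slot in the application's mapped sqe array:

  #include <string.h>
  #include <linux/io_uring.h>

  /*
   * Sketch: request fdatasync semantics on [off, off + len) of 'fd'.
   * Leaving both off and len at zero would sync the whole file.
   * The caller is assumed to tag the request via sqe->user_data and
   * to publish the sqe through the SQ ring as usual.
   */
  static void prep_fsync_range(struct io_uring_sqe *sqe, int fd,
                               __u64 off, __u32 len)
  {
      memset(sqe, 0, sizeof(*sqe));
      sqe->opcode = IORING_OP_FSYNC;
      sqe->fd = fd;
      sqe->off = off;
      sqe->len = len;
      sqe->fsync_flags = IORING_FSYNC_DATASYNC;
  }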

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c                 | 34 ++++++++++++++++++++++++++++++++++
 include/uapi/linux/io_uring.h |  8 +++++++-
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 309eed3629d2..7d2e6db08b05 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4,6 +4,7 @@
  * supporting fast/efficient IO.
  *
  * Copyright (C) 2018-2019 Jens Axboe
+ * Copyright (c) 2018-2019 Christoph Hellwig
  */
 #include <linux/kernel.h>
 #include <linux/init.h>
@@ -466,6 +467,36 @@ static int io_nop(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	return 0;
 }
 
+static int io_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+		    bool force_nonblock)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+	loff_t end = sqe->off + sqe->len;
+	struct file *file;
+	int ret;
+
+	/* fsync always requires a blocking context */
+	if (force_nonblock)
+		return -EAGAIN;
+
+	if (unlikely(sqe->addr || sqe->ioprio))
+		return -EINVAL;
+	if (unlikely(sqe->fsync_flags & ~IORING_FSYNC_DATASYNC))
+		return -EINVAL;
+
+	file = fget(sqe->fd);
+	if (unlikely(!file))
+		return -EBADF;
+
+	ret = vfs_fsync_range(file, sqe->off, end > 0 ? end : LLONG_MAX,
+			sqe->fsync_flags & IORING_FSYNC_DATASYNC);
+
+	fput(file);
+	io_cqring_add_event(ctx, sqe->user_data, ret, 0);
+	io_free_req(req);
+	return 0;
+}
+
 static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 			   struct sqe_submit *s, bool force_nonblock)
 {
@@ -487,6 +518,9 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	case IORING_OP_WRITEV:
 		ret = io_write(req, sqe, force_nonblock);
 		break;
+	case IORING_OP_FSYNC:
+		ret = io_fsync(req, sqe, force_nonblock);
+		break;
 	default:
 		ret = -EINVAL;
 		break;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index ce65db9269a8..ca503ded73e3 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -26,7 +26,7 @@ struct io_uring_sqe {
 	__u32	len;		/* buffer size or number of iovecs */
 	union {
 		__kernel_rwf_t	rw_flags;
-		__u32		__resv;
+		__u32		fsync_flags;
 	};
 	__u64	user_data;	/* data to be passed back at completion time */
 	__u64	__pad2[3];
@@ -35,6 +35,12 @@ struct io_uring_sqe {
 #define IORING_OP_NOP		0
 #define IORING_OP_READV		1
 #define IORING_OP_WRITEV	2
+#define IORING_OP_FSYNC		3
+
+/*
+ * sqe->fsync_flags
+ */
+#define IORING_FSYNC_DATASYNC	(1 << 0)
 
 /*
  * IO completion data structure (Completion Queue Entry)
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 07/18] io_uring: support for IO polling
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (5 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 06/18] io_uring: add fsync support Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-29 17:24   ` Christoph Hellwig
  2019-01-28 21:35 ` [PATCH 08/18] fs: add fget_many() and fput_many() Jens Axboe
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

Add support for a polled io_uring context. When a read or write is
submitted to a polled context, the application must poll for completions
on the CQ ring through io_uring_enter(2). Polled IO may not generate
IRQ completions, hence they need to be actively found by the application
itself.

To use polling, io_uring_setup() must be used with the
IORING_SETUP_IOPOLL flag being set. It is illegal to mix and match
polled and non-polled IO on an io_uring.
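
As an illustration (not from the patch), a rough userspace sketch of
setting up a polled ring and reaping completions, using raw syscalls
since there are no libc wrappers yet; the syscall numbers and
IORING_ENTER_GETEVENTS come from earlier patches in this series, and
ring mmap/sqe queueing details plus the to_submit/min_complete counts
are placeholders:

    struct io_uring_params p;
    int ring_fd, ret;

    memset(&p, 0, sizeof(p));
    p.flags = IORING_SETUP_IOPOLL;      /* all IO on this ring is polled */

    ring_fd = syscall(__NR_io_uring_setup, 128, &p);

    /* ... mmap the rings, queue READV/WRITEV sqes against an O_DIRECT fd ... */

    /* submit and actively reap; no IRQ-driven completion to sleep on */
    ret = syscall(__NR_io_uring_enter, ring_fd, to_submit, min_complete,
                  IORING_ENTER_GETEVENTS, NULL, 0);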

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c                 | 262 ++++++++++++++++++++++++++++++++--
 include/uapi/linux/io_uring.h |   5 +
 2 files changed, 256 insertions(+), 11 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7d2e6db08b05..ed5b605a1748 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -100,6 +100,8 @@ struct io_ring_ctx {
 
 	struct {
 		spinlock_t		completion_lock;
+		bool			poll_multi_file;
+		struct list_head	poll_list;
 	} ____cacheline_aligned_in_smp;
 };
 
@@ -118,12 +120,16 @@ struct io_kiocb {
 	struct list_head	list;
 	unsigned int		flags;
 #define REQ_F_FORCE_NONBLOCK	1	/* inline submission attempt */
+#define REQ_F_IOPOLL_COMPLETED	2	/* polled IO has completed */
+#define REQ_F_IOPOLL_EAGAIN	4	/* submission got EAGAIN */
 	u64			user_data;
+	u64			res;
 
 	struct work_struct	work;
 };
 
 #define IO_PLUG_THRESHOLD		2
+#define IO_IOPOLL_BATCH			8
 
 static struct kmem_cache *req_cachep;
 
@@ -155,6 +161,7 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	mutex_init(&ctx->uring_lock);
 	init_waitqueue_head(&ctx->wait);
 	spin_lock_init(&ctx->completion_lock);
+	INIT_LIST_HEAD(&ctx->poll_list);
 	return ctx;
 }
 
@@ -190,8 +197,8 @@ static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
 	return &ring->cqes[tail & ctx->cq_mask];
 }
 
-static void __io_cqring_add_event(struct io_ring_ctx *ctx, u64 ki_user_data,
-				  long res, unsigned ev_flags)
+static void io_cqring_fill_event(struct io_ring_ctx *ctx, u64 ki_user_data,
+				 long res, unsigned ev_flags)
 {
 	struct io_uring_cqe *cqe;
 
@@ -205,9 +212,15 @@ static void __io_cqring_add_event(struct io_ring_ctx *ctx, u64 ki_user_data,
 		cqe->user_data = ki_user_data;
 		cqe->res = res;
 		cqe->flags = ev_flags;
-		io_commit_cqring(ctx);
 	} else
 		ctx->cq_ring->overflow++;
+}
+
+static void __io_cqring_add_event(struct io_ring_ctx *ctx, u64 ki_user_data,
+				  long res, unsigned ev_flags)
+{
+	io_cqring_fill_event(ctx, ki_user_data, res, ev_flags);
+	io_commit_cqring(ctx);
 
 	if (waitqueue_active(&ctx->wait))
 		wake_up(&ctx->wait);
@@ -249,12 +262,158 @@ static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
 	return NULL;
 }
 
+static void io_free_req_many(struct io_ring_ctx *ctx, void **reqs, int *nr)
+{
+	if (*nr) {
+		kmem_cache_free_bulk(req_cachep, *nr, reqs);
+		io_ring_drop_ctx_refs(ctx, *nr);
+		*nr = 0;
+	}
+}
+
 static void io_free_req(struct io_kiocb *req)
 {
 	io_ring_drop_ctx_refs(req->ctx, 1);
 	kmem_cache_free(req_cachep, req);
 }
 
+/*
+ * Find and free completed poll iocbs
+ */
+static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+			       struct list_head *done)
+{
+	void *reqs[IO_IOPOLL_BATCH];
+	struct io_kiocb *req;
+	int to_free = 0;
+
+	while (!list_empty(done)) {
+		req = list_first_entry(done, struct io_kiocb, list);
+		list_del(&req->list);
+
+		io_cqring_fill_event(ctx, req->user_data, req->res, 0);
+
+		reqs[to_free++] = req;
+		(*nr_events)++;
+
+		fput(req->rw.ki_filp);
+		if (to_free == ARRAY_SIZE(reqs))
+			io_free_req_many(ctx, reqs, &to_free);
+	}
+	io_commit_cqring(ctx);
+
+	if (to_free)
+		io_free_req_many(ctx, reqs, &to_free);
+}
+
+static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+			long min)
+{
+	struct io_kiocb *req, *tmp;
+	LIST_HEAD(done);
+	bool spin;
+	int ret;
+
+	/*
+	 * Only spin for completions if we don't have multiple devices hanging
+	 * off our complete list, and we're under the requested amount.
+	 */
+	spin = !ctx->poll_multi_file && (*nr_events < min);
+
+	ret = 0;
+	list_for_each_entry_safe(req, tmp, &ctx->poll_list, list) {
+		struct kiocb *kiocb = &req->rw;
+
+		/*
+		 * Move completed entries to our local list. If we find a
+		 * request that requires polling, break out and complete
+		 * the done list first, if we have entries there.
+		 */
+		if (req->flags & REQ_F_IOPOLL_COMPLETED) {
+			list_move_tail(&req->list, &done);
+			continue;
+		}
+		if (!list_empty(&done))
+			break;
+
+		ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
+		if (ret < 0)
+			break;
+
+		if (ret && spin)
+			spin = false;
+		ret = 0;
+	}
+
+	if (!list_empty(&done))
+		io_iopoll_complete(ctx, nr_events, &done);
+
+	return ret;
+}
+
+/*
+ * Poll for a minimum of 'min' events. Note that if min == 0 we consider that a
+ * non-spinning poll check - we'll still enter the driver poll loop, but only
+ * as a non-spinning completion check.
+ */
+static int io_iopoll_getevents(struct io_ring_ctx *ctx, unsigned int *nr_events,
+				long min)
+{
+	int ret;
+
+	do {
+		if (list_empty(&ctx->poll_list))
+			return 0;
+
+		ret = io_do_iopoll(ctx, nr_events, min);
+		if (ret < 0)
+			break;
+	} while (min && *nr_events < min);
+
+	if (ret < 0)
+		return ret;
+
+	return *nr_events < min;
+}
+
+/*
+ * We can't just wait for polled events to come to us, we have to actively
+ * find and complete them.
+ */
+static void io_iopoll_reap_events(struct io_ring_ctx *ctx)
+{
+	if (!(ctx->flags & IORING_SETUP_IOPOLL))
+		return;
+
+	mutex_lock(&ctx->uring_lock);
+	while (!list_empty(&ctx->poll_list)) {
+		unsigned int nr_events = 0;
+
+		io_iopoll_getevents(ctx, &nr_events, 1);
+	}
+	mutex_unlock(&ctx->uring_lock);
+}
+
+static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
+			   long min)
+{
+	int ret = 0;
+
+	do {
+		int tmin = 0;
+
+		if (*nr_events < min)
+			tmin = min - *nr_events;
+
+		ret = io_iopoll_getevents(ctx, nr_events, tmin);
+		if (ret <= 0)
+			break;
+		ret = 0;
+	} while (!*nr_events || !need_resched());
+
+	return ret;
+}
+
 static void kiocb_end_write(struct kiocb *kiocb)
 {
 	if (kiocb->ki_flags & IOCB_WRITE) {
@@ -281,9 +440,60 @@ static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
 	io_free_req(req);
 }
 
+static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
+{
+	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw);
+
+	kiocb_end_write(kiocb);
+
+	if (unlikely(res == -EAGAIN)) {
+		req->flags |= REQ_F_IOPOLL_EAGAIN;
+	} else {
+		req->flags |= REQ_F_IOPOLL_COMPLETED;
+		req->res = res;
+	}
+}
+
+/*
+ * After the iocb has been issued, it's safe to be found on the poll list.
+ * Adding the kiocb to the list AFTER submission ensures that we don't
+ * find it from a io_iopoll_getevents() thread before the issuer is done
+ * accessing the kiocb cookie.
+ */
+static void io_iopoll_req_issued(struct io_kiocb *req)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+
+	/*
+	 * Track whether we have multiple files in our lists. This will impact
+	 * how we do polling eventually, not spinning if we're on potentially
+	 * different devices.
+	 */
+	if (list_empty(&ctx->poll_list)) {
+		ctx->poll_multi_file = false;
+	} else if (!ctx->poll_multi_file) {
+		struct io_kiocb *list_req;
+
+		list_req = list_first_entry(&ctx->poll_list, struct io_kiocb,
+						list);
+		if (list_req->rw.ki_filp != req->rw.ki_filp)
+			ctx->poll_multi_file = true;
+	}
+
+	/*
+	 * For fast devices, IO may have already completed. If it has, add
+	 * it to the front so we find it first.
+	 */
+	if (req->flags & REQ_F_IOPOLL_COMPLETED)
+		list_add(&req->list, &ctx->poll_list);
+	else
+		list_add_tail(&req->list, &ctx->poll_list);
+}
+
 static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 		      bool force_nonblock)
 {
+	struct io_ring_ctx *ctx = req->ctx;
 	struct kiocb *kiocb = &req->rw;
 	int ret;
 
@@ -309,12 +519,21 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 		kiocb->ki_flags |= IOCB_NOWAIT;
 		req->flags |= REQ_F_FORCE_NONBLOCK;
 	}
-	if (kiocb->ki_flags & IOCB_HIPRI) {
-		ret = -EINVAL;
-		goto out_fput;
-	}
+	if (ctx->flags & IORING_SETUP_IOPOLL) {
+		ret = -EOPNOTSUPP;
+		if (!(kiocb->ki_flags & IOCB_DIRECT) ||
+		    !kiocb->ki_filp->f_op->iopoll)
+			goto out_fput;
 
-	kiocb->ki_complete = io_complete_rw;
+		kiocb->ki_flags |= IOCB_HIPRI;
+		kiocb->ki_complete = io_complete_rw_iopoll;
+	} else {
+		if (kiocb->ki_flags & IOCB_HIPRI) {
+			ret = -EINVAL;
+			goto out_fput;
+		}
+		kiocb->ki_complete = io_complete_rw;
+	}
 	return 0;
 out_fput:
 	fput(kiocb->ki_filp);
@@ -462,6 +681,9 @@ static int io_nop(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 
+	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+		return -EINVAL;
+
 	io_cqring_add_event(ctx, sqe->user_data, 0, 0);
 	io_free_req(req);
 	return 0;
@@ -479,6 +701,8 @@ static int io_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	if (force_nonblock)
 		return -EAGAIN;
 
+	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+		return -EINVAL;
 	if (unlikely(sqe->addr || sqe->ioprio))
 		return -EINVAL;
 	if (unlikely(sqe->fsync_flags & ~IORING_FSYNC_DATASYNC))
@@ -526,7 +750,16 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		break;
 	}
 
-	return ret;
+	if (ret)
+		return ret;
+
+	if (ctx->flags & IORING_SETUP_IOPOLL) {
+		if (req->flags & REQ_F_IOPOLL_EAGAIN)
+			return -EAGAIN;
+		io_iopoll_req_issued(req);
+	}
+
+	return 0;
 }
 
 static void io_sq_wq_submit_work(struct work_struct *work)
@@ -734,6 +967,8 @@ static int __io_uring_enter(struct io_ring_ctx *ctx, unsigned to_submit,
 			return submitted;
 	}
 	if (flags & IORING_ENTER_GETEVENTS) {
+		unsigned nr_events = 0;
+
 		/*
 		 * The application could have included the 'to_submit' count
 		 * in how many events it wanted to wait for. If we failed to
@@ -743,7 +978,10 @@ static int __io_uring_enter(struct io_ring_ctx *ctx, unsigned to_submit,
 		if (submitted < to_submit)
 			min_complete = min_t(unsigned, submitted, min_complete);
 
-		ret = io_cqring_wait(ctx, min_complete, sig, sigsz);
+		if (ctx->flags & IORING_SETUP_IOPOLL)
+			ret = io_iopoll_check(ctx, &nr_events, min_complete);
+		else
+			ret = io_cqring_wait(ctx, min_complete, sig, sigsz);
 	}
 
 	return submitted ? submitted : ret;
@@ -842,6 +1080,7 @@ static unsigned long ring_pages(unsigned sq_entries, unsigned cq_entries)
 static void io_ring_ctx_free(struct io_ring_ctx *ctx)
 {
 	destroy_workqueue(ctx->sqo_wq);
+	io_iopoll_reap_events(ctx);
 
 	io_mem_free(ctx->sq_ring);
 	io_mem_free(ctx->sq_sqes);
@@ -880,6 +1119,7 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 	percpu_ref_kill(&ctx->refs);
 	mutex_unlock(&ctx->uring_lock);
 
+	io_iopoll_reap_events(ctx);
 	wait_for_completion(&ctx->ctx_done);
 	io_ring_ctx_free(ctx);
 }
@@ -1097,7 +1337,7 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
 			return -EINVAL;
 	}
 
-	if (p.flags)
+	if (p.flags & ~IORING_SETUP_IOPOLL)
 		return -EINVAL;
 
 	ret = io_uring_create(entries, &p);
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index ca503ded73e3..4fc5fbd07688 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -32,6 +32,11 @@ struct io_uring_sqe {
 	__u64	__pad2[3];
 };
 
+/*
+ * io_uring_setup() flags
+ */
+#define IORING_SETUP_IOPOLL	(1 << 0)	/* io_context is polled */
+
 #define IORING_OP_NOP		0
 #define IORING_OP_READV		1
 #define IORING_OP_WRITEV	2
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 08/18] fs: add fget_many() and fput_many()
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (6 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 07/18] io_uring: support for IO polling Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 09/18] io_uring: use fget/fput_many() for file references Jens Axboe
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

Some use cases repeatedly get and put references to the same file, but
the only exposed interface does this one reference at a time. As each of
these entail an atomic inc or dec on a shared structure, that cost can
add up.

Add fget_many(), which works just like fget(), except it takes an
argument for how many references to get on the file. Ditto fput_many(),
which can drop an arbitrary number of references to a file.
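
Purely as an illustration of the intended pairing (not code from this
patch), a kernel-side caller that expects to issue nr_ios requests
against the same descriptor might batch references like the sketch
below; nr_ios and unused_refs are placeholders:

    /* one atomic op covers all expected IOs */
    struct file *file = fget_many(fd, nr_ios);

    if (!file)
        return -EBADF;

    /* ... issue up to nr_ios requests against 'file' ... */

    /* drop the references that were never handed out */
    if (unused_refs)
        fput_many(file, unused_refs);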

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/file.c            | 15 ++++++++++-----
 fs/file_table.c      |  9 +++++++--
 include/linux/file.h |  2 ++
 include/linux/fs.h   |  4 +++-
 4 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/fs/file.c b/fs/file.c
index 3209ee271c41..97df385d6ab0 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -705,7 +705,7 @@ void do_close_on_exec(struct files_struct *files)
 	spin_unlock(&files->file_lock);
 }
 
-static struct file *__fget(unsigned int fd, fmode_t mask)
+static struct file *__fget(unsigned int fd, fmode_t mask, unsigned int refs)
 {
 	struct files_struct *files = current->files;
 	struct file *file;
@@ -720,7 +720,7 @@ static struct file *__fget(unsigned int fd, fmode_t mask)
 		 */
 		if (file->f_mode & mask)
 			file = NULL;
-		else if (!get_file_rcu(file))
+		else if (!get_file_rcu_many(file, refs))
 			goto loop;
 	}
 	rcu_read_unlock();
@@ -728,15 +728,20 @@ static struct file *__fget(unsigned int fd, fmode_t mask)
 	return file;
 }
 
+struct file *fget_many(unsigned int fd, unsigned int refs)
+{
+	return __fget(fd, FMODE_PATH, refs);
+}
+
 struct file *fget(unsigned int fd)
 {
-	return __fget(fd, FMODE_PATH);
+	return __fget(fd, FMODE_PATH, 1);
 }
 EXPORT_SYMBOL(fget);
 
 struct file *fget_raw(unsigned int fd)
 {
-	return __fget(fd, 0);
+	return __fget(fd, 0, 1);
 }
 EXPORT_SYMBOL(fget_raw);
 
@@ -767,7 +772,7 @@ static unsigned long __fget_light(unsigned int fd, fmode_t mask)
 			return 0;
 		return (unsigned long)file;
 	} else {
-		file = __fget(fd, mask);
+		file = __fget(fd, mask, 1);
 		if (!file)
 			return 0;
 		return FDPUT_FPUT | (unsigned long)file;
diff --git a/fs/file_table.c b/fs/file_table.c
index 5679e7fcb6b0..155d7514a094 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -326,9 +326,9 @@ void flush_delayed_fput(void)
 
 static DECLARE_DELAYED_WORK(delayed_fput_work, delayed_fput);
 
-void fput(struct file *file)
+void fput_many(struct file *file, unsigned int refs)
 {
-	if (atomic_long_dec_and_test(&file->f_count)) {
+	if (atomic_long_sub_and_test(refs, &file->f_count)) {
 		struct task_struct *task = current;
 
 		if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) {
@@ -347,6 +347,11 @@ void fput(struct file *file)
 	}
 }
 
+void fput(struct file *file)
+{
+	fput_many(file, 1);
+}
+
 /*
  * synchronous analog of fput(); for kernel threads that might be needed
  * in some umount() (and thus can't use flush_delayed_fput() without
diff --git a/include/linux/file.h b/include/linux/file.h
index 6b2fb032416c..3fcddff56bc4 100644
--- a/include/linux/file.h
+++ b/include/linux/file.h
@@ -13,6 +13,7 @@
 struct file;
 
 extern void fput(struct file *);
+extern void fput_many(struct file *, unsigned int);
 
 struct file_operations;
 struct vfsmount;
@@ -44,6 +45,7 @@ static inline void fdput(struct fd fd)
 }
 
 extern struct file *fget(unsigned int fd);
+extern struct file *fget_many(unsigned int fd, unsigned int refs);
 extern struct file *fget_raw(unsigned int fd);
 extern unsigned long __fdget(unsigned int fd);
 extern unsigned long __fdget_raw(unsigned int fd);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ccb0b7a63aa5..acaad78b6781 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -952,7 +952,9 @@ static inline struct file *get_file(struct file *f)
 	atomic_long_inc(&f->f_count);
 	return f;
 }
-#define get_file_rcu(x) atomic_long_inc_not_zero(&(x)->f_count)
+#define get_file_rcu_many(x, cnt)	\
+	atomic_long_add_unless(&(x)->f_count, (cnt), 0)
+#define get_file_rcu(x) get_file_rcu_many((x), 1)
 #define fput_atomic(x)	atomic_long_add_unless(&(x)->f_count, -1, 1)
 #define file_count(x)	atomic_long_read(&(x)->f_count)
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 09/18] io_uring: use fget/fput_many() for file references
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (7 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 08/18] fs: add fget_many() and fput_many() Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:56   ` Jann Horn
  2019-01-28 21:35 ` [PATCH 10/18] io_uring: batch io_kiocb allocation Jens Axboe
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

Add a separate io_submit_state structure, to cache some of the things
we need for IO submission.

One such example is file reference batching: we get as many references
as the number of sqes we are submitting, and drop unused ones if we end
up switching files. The assumption here is that we're usually only
dealing with one fd, and if there are multiple, hopefully they are at
least somewhat ordered. This could trivially be extended to cover
multiple fds, if needed.

On the completion side we do the same thing, except this is trivially
done just locally in io_iopoll_reap().
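
For readers, a condensed model of the submission-side cache described
above (hypothetical names file_cache/cache_get/cache_put_unused; the
actual implementation is io_file_get()/io_file_put() in the diff below):
reuse the cached file while the fd repeats, and hand back the surplus
references once the fd changes or the batch ends:

    struct file_cache {
        struct file *file;
        unsigned int fd, has_refs, used_refs;
    };

    static void cache_put_unused(struct file_cache *c)
    {
        unsigned int diff = c->has_refs - c->used_refs;

        if (c->file && diff)
            fput_many(c->file, diff);
        c->file = NULL;
    }

    static struct file *cache_get(struct file_cache *c, int fd,
                                  unsigned int ios_left)
    {
        if (c->file && c->fd == fd) {
            c->used_refs++;                 /* reuse, no atomic touched */
            return c->file;
        }
        cache_put_unused(c);                /* switched fds: drop surplus */
        c->file = fget_many(fd, ios_left);  /* one atomic for the batch */
        if (!c->file)
            return NULL;
        c->fd = fd;
        c->has_refs = ios_left;
        c->used_refs = 1;
        return c->file;
    }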

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 139 ++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 118 insertions(+), 21 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ed5b605a1748..25bdd719690d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -131,6 +131,19 @@ struct io_kiocb {
 #define IO_PLUG_THRESHOLD		2
 #define IO_IOPOLL_BATCH			8
 
+struct io_submit_state {
+	struct blk_plug plug;
+
+	/*
+	 * File reference cache
+	 */
+	struct file *file;
+	unsigned int fd;
+	unsigned int has_refs;
+	unsigned int used_refs;
+	unsigned int ios_left;
+};
+
 static struct kmem_cache *req_cachep;
 
 static const struct file_operations io_uring_fops;
@@ -284,9 +297,11 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 			       struct list_head *done)
 {
 	void *reqs[IO_IOPOLL_BATCH];
+	int file_count, to_free;
+	struct file *file = NULL;
 	struct io_kiocb *req;
-	int to_free = 0;
 
+	file_count = to_free = 0;
 	while (!list_empty(done)) {
 		req = list_first_entry(done, struct io_kiocb, list);
 		list_del(&req->list);
@@ -296,12 +311,28 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 		reqs[to_free++] = req;
 		(*nr_events)++;
 
-		fput(req->rw.ki_filp);
+		/*
+		 * Batched puts of the same file, to avoid dirtying the
+		 * file usage count multiple times, if avoidable.
+		 */
+		if (!file) {
+			file = req->rw.ki_filp;
+			file_count = 1;
+		} else if (file == req->rw.ki_filp) {
+			file_count++;
+		} else {
+			fput_many(file, file_count);
+			file = req->rw.ki_filp;
+			file_count = 1;
+		}
+
 		if (to_free == ARRAY_SIZE(reqs))
 			io_free_req_many(ctx, reqs, &to_free);
 	}
 	io_commit_cqring(ctx);
 
+	if (file)
+		fput_many(file, file_count);
 	if (to_free)
 		io_free_req_many(ctx, reqs, &to_free);
 }
@@ -490,14 +521,56 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
 		list_add_tail(&req->list, &ctx->poll_list);
 }
 
+static void io_file_put(struct io_submit_state *state, struct file *file)
+{
+	if (!state) {
+		fput(file);
+	} else if (state->file) {
+		int diff = state->has_refs - state->used_refs;
+
+		if (diff)
+			fput_many(state->file, diff);
+		state->file = NULL;
+	}
+}
+
+/*
+ * Get as many references to a file as we have IOs left in this submission,
+ * assuming most submissions are for one file, or at least that each file
+ * has more than one submission.
+ */
+static struct file *io_file_get(struct io_submit_state *state, int fd)
+{
+	if (!state)
+		return fget(fd);
+
+	if (state->file) {
+		if (state->fd == fd) {
+			state->used_refs++;
+			state->ios_left--;
+			return state->file;
+		}
+		io_file_put(state, NULL);
+	}
+	state->file = fget_many(fd, state->ios_left);
+	if (!state->file)
+		return NULL;
+
+	state->fd = fd;
+	state->has_refs = state->ios_left;
+	state->used_refs = 1;
+	state->ios_left--;
+	return state->file;
+}
+
 static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
-		      bool force_nonblock)
+		      bool force_nonblock, struct io_submit_state *state)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct kiocb *kiocb = &req->rw;
 	int ret;
 
-	kiocb->ki_filp = fget(sqe->fd);
+	kiocb->ki_filp = io_file_get(state, sqe->fd);
 	if (unlikely(!kiocb->ki_filp))
 		return -EBADF;
 	kiocb->ki_pos = sqe->off;
@@ -536,7 +609,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	}
 	return 0;
 out_fput:
-	fput(kiocb->ki_filp);
+	io_file_put(state, kiocb->ki_filp);
 	return ret;
 }
 
@@ -577,7 +650,7 @@ static int io_import_iovec(struct io_ring_ctx *ctx, int rw,
 }
 
 static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
-		       bool force_nonblock)
+		       bool force_nonblock, struct io_submit_state *state)
 {
 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
 	struct kiocb *kiocb = &req->rw;
@@ -585,7 +658,7 @@ static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	struct file *file;
 	ssize_t ret;
 
-	ret = io_prep_rw(req, sqe, force_nonblock);
+	ret = io_prep_rw(req, sqe, force_nonblock, state);
 	if (ret)
 		return ret;
 	file = kiocb->ki_filp;
@@ -620,7 +693,7 @@ static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 }
 
 static ssize_t io_write(struct io_kiocb *req, const struct io_uring_sqe *sqe,
-			bool force_nonblock)
+			bool force_nonblock, struct io_submit_state *state)
 {
 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
 	struct kiocb *kiocb = &req->rw;
@@ -628,7 +701,7 @@ static ssize_t io_write(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	struct file *file;
 	ssize_t ret;
 
-	ret = io_prep_rw(req, sqe, force_nonblock);
+	ret = io_prep_rw(req, sqe, force_nonblock, state);
 	if (ret)
 		return ret;
 	file = kiocb->ki_filp;
@@ -722,7 +795,8 @@ static int io_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 }
 
 static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			   struct sqe_submit *s, bool force_nonblock)
+			   struct sqe_submit *s, bool force_nonblock,
+			   struct io_submit_state *state)
 {
 	const struct io_uring_sqe *sqe = s->sqe;
 	ssize_t ret;
@@ -737,10 +811,10 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		ret = io_nop(req, sqe);
 		break;
 	case IORING_OP_READV:
-		ret = io_read(req, sqe, force_nonblock);
+		ret = io_read(req, sqe, force_nonblock, state);
 		break;
 	case IORING_OP_WRITEV:
-		ret = io_write(req, sqe, force_nonblock);
+		ret = io_write(req, sqe, force_nonblock, state);
 		break;
 	case IORING_OP_FSYNC:
 		ret = io_fsync(req, sqe, force_nonblock);
@@ -786,7 +860,7 @@ static void io_sq_wq_submit_work(struct work_struct *work)
 	use_mm(ctx->sqo_mm);
 	set_fs(USER_DS);
 
-	ret = __io_submit_sqe(ctx, req, s, false);
+	ret = __io_submit_sqe(ctx, req, s, false, NULL);
 
 	set_fs(old_fs);
 	unuse_mm(ctx->sqo_mm);
@@ -799,7 +873,8 @@ static void io_sq_wq_submit_work(struct work_struct *work)
 	current->files = old_files;
 }
 
-static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s)
+static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
+			 struct io_submit_state *state)
 {
 	struct io_kiocb *req;
 	ssize_t ret;
@@ -812,7 +887,7 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s)
 	if (unlikely(!req))
 		return -EAGAIN;
 
-	ret = __io_submit_sqe(ctx, req, s, true);
+	ret = __io_submit_sqe(ctx, req, s, true, state);
 	if (ret == -EAGAIN) {
 		memcpy(&req->submit, s, sizeof(*s));
 		INIT_WORK(&req->work, io_sq_wq_submit_work);
@@ -825,6 +900,26 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s)
 	return ret;
 }
 
+/*
+ * Batched submission is done, ensure local IO is flushed out.
+ */
+static void io_submit_state_end(struct io_submit_state *state)
+{
+	blk_finish_plug(&state->plug);
+	io_file_put(state, NULL);
+}
+
+/*
+ * Start submission side cache.
+ */
+static void io_submit_state_start(struct io_submit_state *state,
+				  struct io_ring_ctx *ctx, unsigned max_ios)
+{
+	blk_start_plug(&state->plug);
+	state->file = NULL;
+	state->ios_left = max_ios;
+}
+
 static void io_commit_sqring(struct io_ring_ctx *ctx)
 {
 	struct io_sq_ring *ring = ctx->sq_ring;
@@ -879,11 +974,13 @@ static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
 
 static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
 {
+	struct io_submit_state state, *statep = NULL;
 	int i, ret = 0, submit = 0;
-	struct blk_plug plug;
 
-	if (to_submit > IO_PLUG_THRESHOLD)
-		blk_start_plug(&plug);
+	if (to_submit > IO_PLUG_THRESHOLD) {
+		io_submit_state_start(&state, ctx, to_submit);
+		statep = &state;
+	}
 
 	for (i = 0; i < to_submit; i++) {
 		struct sqe_submit s;
@@ -891,7 +988,7 @@ static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
 		if (!io_get_sqring(ctx, &s))
 			break;
 
-		ret = io_submit_sqe(ctx, &s);
+		ret = io_submit_sqe(ctx, &s, statep);
 		if (ret) {
 			io_drop_sqring(ctx);
 			break;
@@ -901,8 +998,8 @@ static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
 	}
 	io_commit_sqring(ctx);
 
-	if (to_submit > IO_PLUG_THRESHOLD)
-		blk_finish_plug(&plug);
+	if (statep)
+		io_submit_state_end(statep);
 
 	return submit ? submit : ret;
 }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 10/18] io_uring: batch io_kiocb allocation
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (8 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 09/18] io_uring: use fget/fput_many() for file references Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-29 17:26   ` Christoph Hellwig
  2019-01-28 21:35 ` [PATCH 11/18] block: implement bio helper to add iter bvec pages to bio Jens Axboe
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

Similarly to how we use state->ios_left to know how many file references
to get, we can use it to allocate the io_kiocbs we need in bulk.
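
For context (not part of the change), the bulk slab API this relies on
is used roughly as in the sketch below; 'want' and 'used' are
placeholder counts:

    void *reqs[IO_IOPOLL_BATCH];
    int got;

    /* ask for as many objects as this submission batch can still use */
    got = kmem_cache_alloc_bulk(req_cachep, GFP_KERNEL, want, reqs);
    if (got <= 0)
        got = 0;    /* fall back to plain kmem_cache_alloc() per request */

    /* ... hand out reqs[0..got-1] as sqes are submitted ... */

    /* return whatever is left over in one call at the end of the batch */
    if (got - used)
        kmem_cache_free_bulk(req_cachep, got - used, &reqs[used]);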

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 39 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 25bdd719690d..a33f1b1709d0 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -134,6 +134,13 @@ struct io_kiocb {
 struct io_submit_state {
 	struct blk_plug plug;
 
+	/*
+	 * io_kiocb alloc cache
+	 */
+	void *reqs[IO_IOPOLL_BATCH];
+	unsigned int free_reqs;
+	unsigned int cur_req;
+
 	/*
 	 * File reference cache
 	 */
@@ -257,20 +264,42 @@ static void io_ring_drop_ctx_refs(struct io_ring_ctx *ctx, unsigned refs)
 		wake_up(&ctx->wait);
 }
 
-static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
+static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
+				   struct io_submit_state *state)
 {
+	gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
 	struct io_kiocb *req;
 
 	/* safe to use the non tryget, as we're inside ring ref already */
 	percpu_ref_get(&ctx->refs);
 
-	req = kmem_cache_alloc(req_cachep, GFP_ATOMIC | __GFP_NOWARN);
+	if (!state)
+		req = kmem_cache_alloc(req_cachep, gfp);
+	else if (!state->free_reqs) {
+		size_t sz;
+		int ret;
+
+		sz = min_t(size_t, state->ios_left, ARRAY_SIZE(state->reqs));
+		ret = kmem_cache_alloc_bulk(req_cachep, gfp, sz,
+						state->reqs);
+		if (ret <= 0)
+			goto out;
+		state->free_reqs = ret - 1;
+		state->cur_req = 1;
+		req = state->reqs[0];
+	} else {
+		req = state->reqs[state->cur_req];
+		state->free_reqs--;
+		state->cur_req++;
+	}
+
 	if (req) {
 		req->ctx = ctx;
 		req->flags = 0;
 		return req;
 	}
 
+out:
 	io_ring_drop_ctx_refs(ctx, 1);
 	return NULL;
 }
@@ -883,7 +912,7 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
 	if (unlikely(s->sqe->flags))
 		return -EINVAL;
 
-	req = io_get_req(ctx);
+	req = io_get_req(ctx, state);
 	if (unlikely(!req))
 		return -EAGAIN;
 
@@ -907,6 +936,9 @@ static void io_submit_state_end(struct io_submit_state *state)
 {
 	blk_finish_plug(&state->plug);
 	io_file_put(state, NULL);
+	if (state->free_reqs)
+		kmem_cache_free_bulk(req_cachep, state->free_reqs,
+					&state->reqs[state->cur_req]);
 }
 
 /*
@@ -916,6 +948,7 @@ static void io_submit_state_start(struct io_submit_state *state,
 				  struct io_ring_ctx *ctx, unsigned max_ios)
 {
 	blk_start_plug(&state->plug);
+	state->free_reqs = 0;
 	state->file = NULL;
 	state->ios_left = max_ios;
 }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 11/18] block: implement bio helper to add iter bvec pages to bio
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (9 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 10/18] io_uring: batch io_kiocb allocation Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 12/18] io_uring: add support for pre-mapped user IO buffers Jens Axboe
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

For an ITER_BVEC, we can just iterate the iov and add the pages
to the bio directly. This requires that the caller doesn't release
the pages on IO completion; we add a BIO_NO_PAGE_REF flag for that.

The current two callers of bio_iov_iter_get_pages() are updated to
check if they need to release pages on completion. This makes them
work with bvecs that contain kernel mapped pages already.
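
For illustration (not from the patch), a caller driving this path might
look roughly like the sketch below; bio allocation and submission are
elided, bvec/nr_segs/total_len/bv/i/ret are placeholders, and the
completion half simply mirrors what the updated blkdev/iomap end_io
handlers do:

    struct iov_iter iter;

    /* submission: the bvec pages are already owned by the caller */
    iov_iter_bvec(&iter, READ, bvec, nr_segs, total_len);
    ret = bio_iov_iter_get_pages(bio, &iter);

    /* completion: only drop page references if the bio actually took them */
    if (!bio_flagged(bio, BIO_NO_PAGE_REF))
        bio_for_each_segment_all(bv, bio, i)
            put_page(bv->bv_page);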

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/bio.c               | 59 ++++++++++++++++++++++++++++++++-------
 fs/block_dev.c            |  5 ++--
 fs/iomap.c                |  5 ++--
 include/linux/blk_types.h |  1 +
 4 files changed, 56 insertions(+), 14 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 4db1008309ed..330df572cfb8 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -828,6 +828,23 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
+{
+	const struct bio_vec *bv = iter->bvec;
+	unsigned int len;
+	size_t size;
+
+	len = min_t(size_t, bv->bv_len, iter->count);
+	size = bio_add_page(bio, bv->bv_page, len,
+				bv->bv_offset + iter->iov_offset);
+	if (size == len) {
+		iov_iter_advance(iter, size);
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
 #define PAGE_PTRS_PER_BVEC     (sizeof(struct bio_vec) / sizeof(struct page *))
 
 /**
@@ -876,23 +893,43 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 }
 
 /**
- * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
+ * bio_iov_iter_get_pages - add user or kernel pages to a bio
  * @bio: bio to add pages to
- * @iter: iov iterator describing the region to be mapped
+ * @iter: iov iterator describing the region to be added
+ *
+ * This takes either an iterator pointing to user memory, or one pointing to
+ * kernel pages (BVEC iterator). If we're adding user pages, we pin them and
+ * map them into the kernel. On IO completion, the caller should put those
+ * pages. If we're adding kernel pages, we just have to add the pages to the
+ * bio directly. We don't grab an extra reference to those pages (the user
+ * should already have that), and we don't put the page on IO completion.
+ * The caller needs to check if the bio is flagged BIO_NO_PAGE_REF on IO
+ * completion. If it isn't, then pages should be released.
  *
- * Pins pages from *iter and appends them to @bio's bvec array. The
- * pages will have to be released using put_page() when done.
  * The function tries, but does not guarantee, to pin as many pages as
- * fit into the bio, or are requested in *iter, whatever is smaller.
- * If MM encounters an error pinning the requested pages, it stops.
- * Error is returned only if 0 pages could be pinned.
+ * fit into the bio, or are requested in *iter, whatever is smaller. If
+ * MM encounters an error pinning the requested pages, it stops. Error
+ * is returned only if 0 pages could be pinned.
  */
 int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 {
+	const bool is_bvec = iov_iter_is_bvec(iter);
 	unsigned short orig_vcnt = bio->bi_vcnt;
 
+	/*
+	 * If this is a BVEC iter, then the pages are kernel pages. Don't
+	 * release them on IO completion.
+	 */
+	if (is_bvec)
+		bio_set_flag(bio, BIO_NO_PAGE_REF);
+
 	do {
-		int ret = __bio_iov_iter_get_pages(bio, iter);
+		int ret;
+
+		if (is_bvec)
+			ret = __bio_iov_bvec_add_pages(bio, iter);
+		else
+			ret = __bio_iov_iter_get_pages(bio, iter);
 
 		if (unlikely(ret))
 			return bio->bi_vcnt > orig_vcnt ? 0 : ret;
@@ -1634,7 +1671,8 @@ static void bio_dirty_fn(struct work_struct *work)
 		next = bio->bi_private;
 
 		bio_set_pages_dirty(bio);
-		bio_release_pages(bio);
+		if (!bio_flagged(bio, BIO_NO_PAGE_REF))
+			bio_release_pages(bio);
 		bio_put(bio);
 	}
 }
@@ -1650,7 +1688,8 @@ void bio_check_pages_dirty(struct bio *bio)
 			goto defer;
 	}
 
-	bio_release_pages(bio);
+	if (!bio_flagged(bio, BIO_NO_PAGE_REF))
+		bio_release_pages(bio);
 	bio_put(bio);
 	return;
 defer:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 392e2bfb636f..051ab41d1c61 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -338,8 +338,9 @@ static void blkdev_bio_end_io(struct bio *bio)
 		struct bio_vec *bvec;
 		int i;
 
-		bio_for_each_segment_all(bvec, bio, i)
-			put_page(bvec->bv_page);
+		if (!bio_flagged(bio, BIO_NO_PAGE_REF))
+			bio_for_each_segment_all(bvec, bio, i)
+				put_page(bvec->bv_page);
 		bio_put(bio);
 	}
 }
diff --git a/fs/iomap.c b/fs/iomap.c
index 4ee50b76b4a1..e5c48a0b20e0 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -1582,8 +1582,9 @@ static void iomap_dio_bio_end_io(struct bio *bio)
 		struct bio_vec *bvec;
 		int i;
 
-		bio_for_each_segment_all(bvec, bio, i)
-			put_page(bvec->bv_page);
+		if (!bio_flagged(bio, BIO_NO_PAGE_REF))
+			bio_for_each_segment_all(bvec, bio, i)
+				put_page(bvec->bv_page);
 		bio_put(bio);
 	}
 }
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index d66bf5f32610..791fee35df88 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -215,6 +215,7 @@ struct bio {
 /*
  * bio flags
  */
+#define BIO_NO_PAGE_REF	0	/* don't put/release bvec pages */
 #define BIO_SEG_VALID	1	/* bi_phys_segments valid */
 #define BIO_CLONED	2	/* doesn't own data */
 #define BIO_BOUNCED	3	/* bio is a bounce bio */
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 12/18] io_uring: add support for pre-mapped user IO buffers
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (10 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 11/18] block: implement bio helper to add iter bvec pages to bio Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 23:35   ` Jann Horn
  2019-01-28 21:35 ` [PATCH 13/18] io_uring: add file set registration Jens Axboe
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

If we have fixed user buffers, we can map them into the kernel when we
set up the io_uring context. That avoids the need to do get_user_pages() for
each and every IO.

To utilize this feature, the application must call io_uring_register()
after having set up an io_uring context, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer
to an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.

If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->buf_index to the desired buffer index. The range
sqe->addr..sqe->addr+sqe->len must lie inside the indexed buffer.

The application may register buffers throughout the lifetime of the
io_uring context. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring context.

It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.

For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.

RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
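
As a usage illustration (not part of the patch), registering one buffer
and then reading into a slice of it could look like the sketch below;
ring_fd, data_fd, buf and the length/offset variables are placeholders,
and sqe acquisition is elided:

    struct iovec iov = {
        .iov_base = buf,            /* anonymous memory, not file backed */
        .iov_len  = buf_len,
    };

    if (syscall(__NR_io_uring_register, ring_fd,
                IORING_REGISTER_BUFFERS, &iov, 1) < 0)
        return -1;

    /* later, per IO: read into a slice of the registered buffer */
    sqe->opcode = IORING_OP_READ_FIXED;
    sqe->fd = data_fd;
    sqe->off = file_offset;
    sqe->addr = (unsigned long) buf + buf_offset;   /* must stay inside iov */
    sqe->len = read_len;
    sqe->buf_index = 0;             /* index into the registered iovec array */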

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 arch/x86/entry/syscalls/syscall_32.tbl |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl |   1 +
 fs/io_uring.c                          | 356 ++++++++++++++++++++++++-
 include/linux/sched/user.h             |   2 +-
 include/linux/syscalls.h               |   2 +
 include/uapi/asm-generic/unistd.h      |   4 +-
 include/uapi/linux/io_uring.h          |  13 +-
 kernel/sys_ni.c                        |   1 +
 8 files changed, 366 insertions(+), 14 deletions(-)

diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 481c126259e9..2eefd2a7c1ce 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -400,3 +400,4 @@
 386	i386	rseq			sys_rseq			__ia32_sys_rseq
 425	i386	io_uring_setup		sys_io_uring_setup		__ia32_sys_io_uring_setup
 426	i386	io_uring_enter		sys_io_uring_enter		__ia32_sys_io_uring_enter
+427	i386	io_uring_register	sys_io_uring_register		__ia32_sys_io_uring_register
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 6a32a430c8e0..65c026185e61 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -345,6 +345,7 @@
 334	common	rseq			__x64_sys_rseq
 425	common	io_uring_setup		__x64_sys_io_uring_setup
 426	common	io_uring_enter		__x64_sys_io_uring_enter
+427	common	io_uring_register	__x64_sys_io_uring_register
 
 #
 # x32-specific system call numbers start at 512 to avoid cache impact
diff --git a/fs/io_uring.c b/fs/io_uring.c
index a33f1b1709d0..682714d6f217 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -25,8 +25,10 @@
 #include <linux/slab.h>
 #include <linux/workqueue.h>
 #include <linux/blkdev.h>
+#include <linux/bvec.h>
 #include <linux/anon_inodes.h>
 #include <linux/sched/mm.h>
+#include <linux/sizes.h>
 
 #include <linux/uaccess.h>
 #include <linux/nospec.h>
@@ -57,6 +59,13 @@ struct io_cq_ring {
 	struct io_uring_cqe	cqes[];
 };
 
+struct io_mapped_ubuf {
+	u64		ubuf;
+	size_t		len;
+	struct		bio_vec *bvec;
+	unsigned int	nr_bvecs;
+};
+
 struct io_ring_ctx {
 	struct {
 		struct percpu_ref	refs;
@@ -89,6 +98,10 @@ struct io_ring_ctx {
 		struct fasync_struct	*cq_fasync;
 	} ____cacheline_aligned_in_smp;
 
+	/* if used, fixed mapped user buffers */
+	unsigned		nr_user_bufs;
+	struct io_mapped_ubuf	*user_bufs;
+
 	struct user_struct	*user;
 
 	struct completion	ctx_done;
@@ -663,12 +676,51 @@ static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
 	}
 }
 
+static int io_import_fixed(struct io_ring_ctx *ctx, int rw,
+			   const struct io_uring_sqe *sqe,
+			   struct iov_iter *iter)
+{
+	struct io_mapped_ubuf *imu;
+	size_t len = sqe->len;
+	size_t offset;
+	int index;
+
+	/* attempt to use fixed buffers without having provided iovecs */
+	if (unlikely(!ctx->user_bufs))
+		return -EFAULT;
+	if (unlikely(sqe->buf_index >= ctx->nr_user_bufs))
+		return -EFAULT;
+
+	index = array_index_nospec(sqe->buf_index, ctx->sq_entries);
+	imu = &ctx->user_bufs[index];
+	if ((unsigned long) sqe->addr < imu->ubuf ||
+	    (unsigned long) sqe->addr + len > imu->ubuf + imu->len)
+		return -EFAULT;
+
+	/*
+	 * May not be a start of buffer, set size appropriately
+	 * and advance us to the beginning.
+	 */
+	offset = (unsigned long) sqe->addr - imu->ubuf;
+	iov_iter_bvec(iter, rw, imu->bvec, imu->nr_bvecs, offset + len);
+	if (offset)
+		iov_iter_advance(iter, offset);
+	return 0;
+}
+
 static int io_import_iovec(struct io_ring_ctx *ctx, int rw,
 			   const struct io_uring_sqe *sqe,
 			   struct iovec **iovec, struct iov_iter *iter)
 {
 	void __user *buf = u64_to_user_ptr(sqe->addr);
 
+	if (sqe->opcode == IORING_OP_READ_FIXED ||
+	    sqe->opcode == IORING_OP_WRITE_FIXED) {
+		ssize_t ret = io_import_fixed(ctx, rw, sqe, iter);
+		*iovec = NULL;
+		return ret;
+	}
+
 #ifdef CONFIG_COMPAT
 	if (in_compat_syscall())
 		return compat_import_iovec(rw, buf, sqe->len, UIO_FASTIOV,
@@ -805,7 +857,7 @@ static int io_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
-	if (unlikely(sqe->addr || sqe->ioprio))
+	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
 		return -EINVAL;
 	if (unlikely(sqe->fsync_flags & ~IORING_FSYNC_DATASYNC))
 		return -EINVAL;
@@ -840,9 +892,19 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		ret = io_nop(req, sqe);
 		break;
 	case IORING_OP_READV:
+		if (unlikely(sqe->buf_index))
+			return -EINVAL;
 		ret = io_read(req, sqe, force_nonblock, state);
 		break;
 	case IORING_OP_WRITEV:
+		if (unlikely(sqe->buf_index))
+			return -EINVAL;
+		ret = io_write(req, sqe, force_nonblock, state);
+		break;
+	case IORING_OP_READ_FIXED:
+		ret = io_read(req, sqe, force_nonblock, state);
+		break;
+	case IORING_OP_WRITE_FIXED:
 		ret = io_write(req, sqe, force_nonblock, state);
 		break;
 	case IORING_OP_FSYNC:
@@ -865,14 +927,21 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	return 0;
 }
 
+static inline bool io_sqe_needs_user(const struct io_uring_sqe *sqe)
+{
+	return !(sqe->opcode == IORING_OP_READ_FIXED ||
+		 sqe->opcode == IORING_OP_WRITE_FIXED);
+}
+
 static void io_sq_wq_submit_work(struct work_struct *work)
 {
 	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
 	struct sqe_submit *s = &req->submit;
 	u64 user_data = s->sqe->user_data;
 	struct io_ring_ctx *ctx = req->ctx;
-	mm_segment_t old_fs = get_fs();
 	struct files_struct *old_files;
+	mm_segment_t old_fs;
+	bool needs_user;
 	int ret;
 
 	 /* Ensure we clear previously set forced non-block flag */
@@ -881,19 +950,28 @@ static void io_sq_wq_submit_work(struct work_struct *work)
 	old_files = current->files;
 	current->files = ctx->sqo_files;
 
-	if (!mmget_not_zero(ctx->sqo_mm)) {
-		ret = -EFAULT;
-		goto err;
+	/*
+	 * If we're doing IO to fixed buffers, we don't need to get/set
+	 * user context
+	 */
+	needs_user = io_sqe_needs_user(s->sqe);
+	if (needs_user) {
+		if (!mmget_not_zero(ctx->sqo_mm)) {
+			ret = -EFAULT;
+			goto err;
+		}
+		use_mm(ctx->sqo_mm);
+		old_fs = get_fs();
+		set_fs(USER_DS);
 	}
 
-	use_mm(ctx->sqo_mm);
-	set_fs(USER_DS);
-
 	ret = __io_submit_sqe(ctx, req, s, false, NULL);
 
-	set_fs(old_fs);
-	unuse_mm(ctx->sqo_mm);
-	mmput(ctx->sqo_mm);
+	if (needs_user) {
+		set_fs(old_fs);
+		unuse_mm(ctx->sqo_mm);
+		mmput(ctx->sqo_mm);
+	}
 err:
 	if (ret) {
 		io_cqring_add_event(ctx, user_data, ret, 0);
@@ -1194,6 +1272,14 @@ static void *io_mem_alloc(size_t size)
 	return (void *) __get_free_pages(gfp_flags, get_order(size));
 }
 
+static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+{
+	if (ctx->user)
+		return __io_account_mem(ctx->user, nr_pages);
+
+	return 0;
+}
+
 static unsigned long ring_pages(unsigned sq_entries, unsigned cq_entries)
 {
 	struct io_sq_ring *sq_ring;
@@ -1207,10 +1293,195 @@ static unsigned long ring_pages(unsigned sq_entries, unsigned cq_entries)
 	return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
 }
 
+static int io_sqe_buffer_unregister(struct io_ring_ctx *ctx)
+{
+	int i, j;
+
+	if (!ctx->user_bufs)
+		return -ENXIO;
+
+	for (i = 0; i < ctx->sq_entries; i++) {
+		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];
+
+		for (j = 0; j < imu->nr_bvecs; j++)
+			put_page(imu->bvec[j].bv_page);
+
+		io_unaccount_mem(ctx, imu->nr_bvecs);
+		kfree(imu->bvec);
+		imu->nr_bvecs = 0;
+	}
+
+	kfree(ctx->user_bufs);
+	ctx->user_bufs = NULL;
+	free_uid(ctx->user);
+	ctx->user = NULL;
+	return 0;
+}
+
+static int io_copy_iov(struct io_ring_ctx *ctx, struct iovec *dst,
+		       void __user *arg, unsigned index)
+{
+	struct iovec __user *src;
+
+#ifdef CONFIG_COMPAT
+	if (in_compat_syscall()) {
+		struct compat_iovec __user *ciovs;
+		struct compat_iovec ciov;
+
+		ciovs = (struct compat_iovec __user *) arg;
+		if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov)))
+			return -EFAULT;
+
+		dst->iov_base = (void __user *) (unsigned long) ciov.iov_base;
+		dst->iov_len = ciov.iov_len;
+		return 0;
+	}
+#endif
+	src = (struct iovec __user *) arg;
+	if (copy_from_user(dst, &src[index], sizeof(*dst)))
+		return -EFAULT;
+	return 0;
+}
+
+static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
+				  unsigned nr_args)
+{
+	struct vm_area_struct **vmas = NULL;
+	struct page **pages = NULL;
+	int i, j, got_pages = 0;
+	int ret = -EINVAL;
+
+	if (ctx->user_bufs)
+		return -EBUSY;
+	if (!nr_args || nr_args > UIO_MAXIOV)
+		return -EINVAL;
+
+	ctx->user_bufs = kcalloc(nr_args, sizeof(struct io_mapped_ubuf),
+					GFP_KERNEL);
+	if (!ctx->user_bufs)
+		return -ENOMEM;
+
+	if (!capable(CAP_IPC_LOCK))
+		ctx->user = get_uid(current_user());
+
+	for (i = 0; i < nr_args; i++) {
+		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];
+		unsigned long off, start, end, ubuf;
+		int pret, nr_pages;
+		struct iovec iov;
+		size_t size;
+
+		ret = io_copy_iov(ctx, &iov, arg, i);
+		if (ret)
+			break;
+
+		/*
+		 * Don't impose further limits on the size and buffer
+		 * constraints here, we'll -EINVAL later when IO is
+		 * submitted if they are wrong.
+		 */
+		ret = -EFAULT;
+		if (!iov.iov_base)
+			goto err;
+
+		/* arbitrary limit, but we need something */
+		if (iov.iov_len > SZ_1G)
+			goto err;
+
+		ubuf = (unsigned long) iov.iov_base;
+		end = (ubuf + iov.iov_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+		start = ubuf >> PAGE_SHIFT;
+		nr_pages = end - start;
+
+		ret = io_account_mem(ctx, nr_pages);
+		if (ret)
+			goto err;
+
+		if (!pages || nr_pages > got_pages) {
+			kfree(vmas);
+			kfree(pages);
+			pages = kmalloc_array(nr_pages, sizeof(struct page *),
+						GFP_KERNEL);
+			vmas = kmalloc_array(nr_pages,
+					sizeof(struct vm_area_struct *),
+					GFP_KERNEL);
+			if (!pages || !vmas) {
+				io_unaccount_mem(ctx, nr_pages);
+				goto err;
+			}
+			got_pages = nr_pages;
+		}
+
+		imu->bvec = kmalloc_array(nr_pages, sizeof(struct bio_vec),
+						GFP_KERNEL);
+		if (!imu->bvec) {
+			io_unaccount_mem(ctx, nr_pages);
+			goto err;
+		}
+
+		down_write(&current->mm->mmap_sem);
+		pret = get_user_pages_longterm(ubuf, nr_pages, FOLL_WRITE,
+						pages, vmas);
+		if (pret == nr_pages) {
+			/* don't support file backed memory */
+			for (j = 0; j < nr_pages; j++) {
+				struct vm_area_struct *vma = vmas[j];
+
+				if (vma->vm_file) {
+					ret = -EOPNOTSUPP;
+					break;
+				}
+			}
+		} else {
+			ret = pret < 0 ? pret : -EFAULT;
+		}
+		up_write(&current->mm->mmap_sem);
+		if (ret) {
+			/*
+			 * if we did partial map, or found file backed vmas,
+			 * release any pages we did get
+			 */
+			if (pret > 0) {
+				for (j = 0; j < pret; j++)
+					put_page(pages[j]);
+			}
+			io_unaccount_mem(ctx, nr_pages);
+			goto err;
+		}
+
+		off = ubuf & ~PAGE_MASK;
+		size = iov.iov_len;
+		for (j = 0; j < nr_pages; j++) {
+			size_t vec_len;
+
+			vec_len = min_t(size_t, size, PAGE_SIZE - off);
+			imu->bvec[j].bv_page = pages[j];
+			imu->bvec[j].bv_len = vec_len;
+			imu->bvec[j].bv_offset = off;
+			off = 0;
+			size -= vec_len;
+		}
+		/* store original address for later verification */
+		imu->ubuf = ubuf;
+		imu->len = iov.iov_len;
+		imu->nr_bvecs = nr_pages;
+	}
+	kfree(pages);
+	kfree(vmas);
+	ctx->nr_user_bufs = nr_args;
+	return 0;
+err:
+	kfree(pages);
+	kfree(vmas);
+	io_sqe_buffer_unregister(ctx);
+	return ret;
+}
+
 static void io_ring_ctx_free(struct io_ring_ctx *ctx)
 {
 	destroy_workqueue(ctx->sqo_wq);
 	io_iopoll_reap_events(ctx);
+	io_sqe_buffer_unregister(ctx);
 
 	io_mem_free(ctx->sq_ring);
 	io_mem_free(ctx->sq_sqes);
@@ -1486,6 +1757,69 @@ SYSCALL_DEFINE2(io_uring_setup, u32, entries,
 	return io_uring_setup(entries, params);
 }
 
+static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
+			       void __user *arg, unsigned nr_args)
+{
+	int ret;
+
+	/* Drop our initial ref and wait for the ctx to be fully idle */
+	percpu_ref_put(&ctx->refs);
+	percpu_ref_kill(&ctx->refs);
+	wait_for_completion(&ctx->ctx_done);
+
+	switch (opcode) {
+	case IORING_REGISTER_BUFFERS:
+		ret = io_sqe_buffer_register(ctx, arg, nr_args);
+		break;
+	case IORING_UNREGISTER_BUFFERS:
+		ret = -EINVAL;
+		if (arg || nr_args)
+			break;
+		ret = io_sqe_buffer_unregister(ctx);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+
+	/* bring the ctx back to life */
+	reinit_completion(&ctx->ctx_done);
+	percpu_ref_resurrect(&ctx->refs);
+	percpu_ref_get(&ctx->refs);
+	return ret;
+}
+
+SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
+		void __user *, arg, unsigned int, nr_args)
+{
+	struct io_ring_ctx *ctx;
+	long ret = -EBADF;
+	struct fd f;
+
+	f = fdget(fd);
+	if (!f.file)
+		return -EBADF;
+
+	ret = -EOPNOTSUPP;
+	if (f.file->f_op != &io_uring_fops)
+		goto out_fput;
+
+	ret = -ENXIO;
+	ctx = f.file->private_data;
+	if (!percpu_ref_tryget(&ctx->refs))
+		goto out_fput;
+
+	ret = -EBUSY;
+	if (mutex_trylock(&ctx->uring_lock)) {
+		ret = __io_uring_register(ctx, opcode, arg, nr_args);
+		mutex_unlock(&ctx->uring_lock);
+	}
+	io_ring_drop_ctx_refs(ctx, 1);
+out_fput:
+	fdput(f);
+	return ret;
+}
+
 static int __init io_uring_init(void)
 {
 	req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC);
diff --git a/include/linux/sched/user.h b/include/linux/sched/user.h
index 39ad98c09c58..c7b5f86b91a1 100644
--- a/include/linux/sched/user.h
+++ b/include/linux/sched/user.h
@@ -40,7 +40,7 @@ struct user_struct {
 	kuid_t uid;
 
 #if defined(CONFIG_PERF_EVENTS) || defined(CONFIG_BPF_SYSCALL) || \
-    defined(CONFIG_NET)
+    defined(CONFIG_NET) || defined(CONFIG_IO_URING)
 	atomic_long_t locked_vm;
 #endif
 
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 3072dbaa7869..3681c05ac538 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -315,6 +315,8 @@ asmlinkage long sys_io_uring_setup(u32 entries,
 asmlinkage long sys_io_uring_enter(unsigned int fd, u32 to_submit,
 				u32 min_complete, u32 flags,
 				const sigset_t __user *sig, size_t sigsz);
+asmlinkage long sys_io_uring_register(unsigned int fd, unsigned int op,
+				void __user *arg, unsigned int nr_args);
 
 /* fs/xattr.c */
 asmlinkage long sys_setxattr(const char __user *path, const char __user *name,
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 87871e7b7ea7..d346229a1eb0 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -744,9 +744,11 @@ __SYSCALL(__NR_kexec_file_load,     sys_kexec_file_load)
 __SYSCALL(__NR_io_uring_setup, sys_io_uring_setup)
 #define __NR_io_uring_enter 426
 __SYSCALL(__NR_io_uring_enter, sys_io_uring_enter)
+#define __NR_io_uring_register 427
+__SYSCALL(__NR_io_uring_register, sys_io_uring_register)
 
 #undef __NR_syscalls
-#define __NR_syscalls 427
+#define __NR_syscalls 428
 
 /*
  * 32 bit systems traditionally used different
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 4fc5fbd07688..03ce7133c3b2 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -29,7 +29,10 @@ struct io_uring_sqe {
 		__u32		fsync_flags;
 	};
 	__u64	user_data;	/* data to be passed back at completion time */
-	__u64	__pad2[3];
+	union {
+		__u16	buf_index;	/* index into fixed buffers, if used */
+		__u64	__pad2[3];
+	};
 };
 
 /*
@@ -41,6 +44,8 @@ struct io_uring_sqe {
 #define IORING_OP_READV		1
 #define IORING_OP_WRITEV	2
 #define IORING_OP_FSYNC		3
+#define IORING_OP_READ_FIXED	4
+#define IORING_OP_WRITE_FIXED	5
 
 /*
  * sqe->fsync_flags
@@ -104,4 +109,10 @@ struct io_uring_params {
 	struct io_cqring_offsets cq_off;
 };
 
+/*
+ * io_uring_register(2) opcodes and arguments
+ */
+#define IORING_REGISTER_BUFFERS		0
+#define IORING_UNREGISTER_BUFFERS	1
+
 #endif
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index ee5e523564bb..1bb6604dc19f 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -48,6 +48,7 @@ COND_SYSCALL_COMPAT(io_getevents);
 COND_SYSCALL_COMPAT(io_pgetevents);
 COND_SYSCALL(io_uring_setup);
 COND_SYSCALL(io_uring_enter);
+COND_SYSCALL(io_uring_register);
 
 /* fs/xattr.c */
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 13/18] io_uring: add file set registration
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (11 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 12/18] io_uring: add support for pre-mapped user IO buffers Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 14/18] io_uring: add submission polling Jens Axboe
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

We normally have to fget/fput for each IO we do on a file. Even with
the batching we do, the cost of the atomic inc/dec of the file usage
count adds up.

This adds IORING_REGISTER_FILES and IORING_UNREGISTER_FILES opcodes
for the io_uring_register(2) system call. The arguments passed in must
be an array of __s32 holding file descriptors, and nr_args should hold
the number of file descriptors the application wishes to pin for the
duration of the io_uring context (or until IORING_UNREGISTER_FILES is
called).

When used, the application must set IOSQE_FIXED_FILE in the sqe->flags
member. Then, instead of setting sqe->fd to the real fd, it sets sqe->fd
to the index in the array passed in to IORING_REGISTER_FILES.

Files are automatically unregistered when the io_uring context is
torn down. An application need only unregister if it wishes to
register a new set of fds.
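
As a sketch of the intended usage (io_uring_register() here stands in for
the raw system call, and ring_fd/sock_fd/file_fd/sqe are placeholders;
error handling omitted):

__s32 fds[2] = { sock_fd, file_fd };

/* pin both files for the lifetime of the ring, or until unregister */
io_uring_register(ring_fd, IORING_REGISTER_FILES, fds, 2);

/* later: issue IO against index 1 instead of passing the raw fd */
sqe->opcode = IORING_OP_READV;
sqe->flags = IOSQE_FIXED_FILE;
sqe->fd = 1;	/* index into fds[], not a file descriptor */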

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c                 | 125 +++++++++++++++++++++++++++++-----
 include/uapi/linux/io_uring.h |   9 ++-
 2 files changed, 116 insertions(+), 18 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 682714d6f217..77993972879b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -98,6 +98,10 @@ struct io_ring_ctx {
 		struct fasync_struct	*cq_fasync;
 	} ____cacheline_aligned_in_smp;
 
+	/* if used, fixed file set */
+	struct file		**user_files;
+	unsigned		nr_user_files;
+
 	/* if used, fixed mapped user buffers */
 	unsigned		nr_user_bufs;
 	struct io_mapped_ubuf	*user_bufs;
@@ -135,6 +139,7 @@ struct io_kiocb {
 #define REQ_F_FORCE_NONBLOCK	1	/* inline submission attempt */
 #define REQ_F_IOPOLL_COMPLETED	2	/* polled IO has completed */
 #define REQ_F_IOPOLL_EAGAIN	4	/* submission got EAGAIN */
+#define REQ_F_FIXED_FILE	8	/* ctx owns file */
 	u64			user_data;
 	u64			res;
 
@@ -357,15 +362,17 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 		 * Batched puts of the same file, to avoid dirtying the
 		 * file usage count multiple times, if avoidable.
 		 */
-		if (!file) {
-			file = req->rw.ki_filp;
-			file_count = 1;
-		} else if (file == req->rw.ki_filp) {
-			file_count++;
-		} else {
-			fput_many(file, file_count);
-			file = req->rw.ki_filp;
-			file_count = 1;
+		if (!(req->flags & REQ_F_FIXED_FILE)) {
+			if (!file) {
+				file = req->rw.ki_filp;
+				file_count = 1;
+			} else if (file == req->rw.ki_filp) {
+				file_count++;
+			} else {
+				fput_many(file, file_count);
+				file = req->rw.ki_filp;
+				file_count = 1;
+			}
 		}
 
 		if (to_free == ARRAY_SIZE(reqs))
@@ -502,13 +509,19 @@ static void kiocb_end_write(struct kiocb *kiocb)
 	}
 }
 
+static void io_fput(struct io_kiocb *req)
+{
+	if (!(req->flags & REQ_F_FIXED_FILE))
+		fput(req->rw.ki_filp);
+}
+
 static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
 {
 	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw);
 
 	kiocb_end_write(kiocb);
 
-	fput(kiocb->ki_filp);
+	io_fput(req);
 	io_cqring_add_event(req->ctx, req->user_data, res, 0);
 	io_free_req(req);
 }
@@ -612,7 +625,14 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	struct kiocb *kiocb = &req->rw;
 	int ret;
 
-	kiocb->ki_filp = io_file_get(state, sqe->fd);
+	if (sqe->flags & IOSQE_FIXED_FILE) {
+		if (unlikely(!ctx->user_files || sqe->fd >= ctx->nr_user_files))
+			return -EBADF;
+		kiocb->ki_filp = ctx->user_files[sqe->fd];
+		req->flags |= REQ_F_FIXED_FILE;
+	} else {
+		kiocb->ki_filp = io_file_get(state, sqe->fd);
+	}
 	if (unlikely(!kiocb->ki_filp))
 		return -EBADF;
 	kiocb->ki_pos = sqe->off;
@@ -651,7 +671,8 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	}
 	return 0;
 out_fput:
-	io_file_put(state, kiocb->ki_filp);
+	if (!(sqe->flags & IOSQE_FIXED_FILE))
+		io_file_put(state, kiocb->ki_filp);
 	return ret;
 }
 
@@ -769,7 +790,7 @@ static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	kfree(iovec);
 out_fput:
 	if (unlikely(ret))
-		fput(file);
+		io_fput(req);
 	return ret;
 }
 
@@ -824,7 +845,7 @@ static ssize_t io_write(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	kfree(iovec);
 out_fput:
 	if (unlikely(ret))
-		fput(file);
+		io_fput(req);
 	return ret;
 }
 
@@ -862,14 +883,23 @@ static int io_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	if (unlikely(sqe->fsync_flags & ~IORING_FSYNC_DATASYNC))
 		return -EINVAL;
 
-	file = fget(sqe->fd);
+	if (sqe->flags & IOSQE_FIXED_FILE) {
+		if (unlikely(!ctx->user_files || sqe->fd >= ctx->nr_user_files))
+			return -EBADF;
+		file = ctx->user_files[sqe->fd];
+	} else {
+		file = fget(sqe->fd);
+	}
+
 	if (unlikely(!file))
 		return -EBADF;
 
 	ret = vfs_fsync_range(file, sqe->off, end > 0 ? end : LLONG_MAX,
 			sqe->fsync_flags & IORING_FSYNC_DATASYNC);
 
-	fput(file);
+	if (!(sqe->flags & IOSQE_FIXED_FILE))
+		fput(file);
+
 	io_cqring_add_event(ctx, sqe->user_data, ret, 0);
 	io_free_req(req);
 	return 0;
@@ -987,7 +1017,7 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
 	ssize_t ret;
 
 	/* enforce forwards compatibility on users */
-	if (unlikely(s->sqe->flags))
+	if (unlikely(s->sqe->flags & ~IOSQE_FIXED_FILE))
 		return -EINVAL;
 
 	req = io_get_req(ctx, state);
@@ -1195,6 +1225,57 @@ static int __io_uring_enter(struct io_ring_ctx *ctx, unsigned to_submit,
 	return submitted ? submitted : ret;
 }
 
+static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
+{
+	int i;
+
+	if (!ctx->user_files)
+		return -ENXIO;
+
+	for (i = 0; i < ctx->nr_user_files; i++)
+		fput(ctx->user_files[i]);
+
+	kfree(ctx->user_files);
+	ctx->user_files = NULL;
+	ctx->nr_user_files = 0;
+	return 0;
+}
+
+static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+				 unsigned nr_args)
+{
+	__s32 __user *fds = (__s32 __user *) arg;
+	int fd, i, ret = 0;
+
+	if (ctx->user_files)
+		return -EBUSY;
+	if (!nr_args)
+		return -EINVAL;
+
+	ctx->user_files = kcalloc(nr_args, sizeof(struct file *), GFP_KERNEL);
+	if (!ctx->user_files)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_args; i++) {
+		ret = -EFAULT;
+		if (copy_from_user(&fd, &fds[i], sizeof(fd)))
+			break;
+
+		ctx->user_files[i] = fget(fd);
+
+		ret = -EBADF;
+		if (!ctx->user_files[i])
+			break;
+		ctx->nr_user_files++;
+		ret = 0;
+	}
+
+	if (ret)
+		io_sqe_files_unregister(ctx);
+
+	return ret;
+}
+
 static int io_sq_offload_start(struct io_ring_ctx *ctx)
 {
 	int ret;
@@ -1482,6 +1563,7 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
 	destroy_workqueue(ctx->sqo_wq);
 	io_iopoll_reap_events(ctx);
 	io_sqe_buffer_unregister(ctx);
+	io_sqe_files_unregister(ctx);
 
 	io_mem_free(ctx->sq_ring);
 	io_mem_free(ctx->sq_sqes);
@@ -1777,6 +1859,15 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			break;
 		ret = io_sqe_buffer_unregister(ctx);
 		break;
+	case IORING_REGISTER_FILES:
+		ret = io_sqe_files_register(ctx, arg, nr_args);
+		break;
+	case IORING_UNREGISTER_FILES:
+		ret = -EINVAL;
+		if (arg || nr_args)
+			break;
+		ret = io_sqe_files_unregister(ctx);
+		break;
 	default:
 		ret = -EINVAL;
 		break;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 03ce7133c3b2..8323320077ec 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -18,7 +18,7 @@
  */
 struct io_uring_sqe {
 	__u8	opcode;		/* type of operation for this sqe */
-	__u8	flags;		/* as of now unused */
+	__u8	flags;		/* IOSQE_ flags */
 	__u16	ioprio;		/* ioprio for the request */
 	__s32	fd;		/* file descriptor to do IO on */
 	__u64	off;		/* offset into file */
@@ -35,6 +35,11 @@ struct io_uring_sqe {
 	};
 };
 
+/*
+ * sqe->flags
+ */
+#define IOSQE_FIXED_FILE	(1 << 0)	/* use fixed fileset */
+
 /*
  * io_uring_setup() flags
  */
@@ -114,5 +119,7 @@ struct io_uring_params {
  */
 #define IORING_REGISTER_BUFFERS		0
 #define IORING_UNREGISTER_BUFFERS	1
+#define IORING_REGISTER_FILES		2
+#define IORING_UNREGISTER_FILES		3
 
 #endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 14/18] io_uring: add submission polling
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (12 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 13/18] io_uring: add file set registration Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 15/18] io_uring: add io_kiocb ref count Jens Axboe
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.

By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_setup(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:

sq_ring->flags |= IORING_SQ_NEED_WAKEUP.

The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:

read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
	io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);

instead of calling it unconditionally.
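
As a sketch, enabling this at setup time looks roughly like the below
(io_uring_setup() stands in for the raw system call, and the values are
examples only):

struct io_uring_params p;

memset(&p, 0, sizeof(p));
p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_SQ_AFF;
p.sq_thread_cpu = 2;		/* pin the poll thread to CPU 2 */
p.sq_thread_idle = 2000;	/* spin for up to 2000 msec when idle */
ring_fd = io_uring_setup(queue_depth, &p);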

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c                 | 232 +++++++++++++++++++++++++++++++++-
 include/uapi/linux/io_uring.h |  12 +-
 2 files changed, 237 insertions(+), 7 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 77993972879b..34e786eb2e2d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -24,6 +24,7 @@
 #include <linux/percpu.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
+#include <linux/kthread.h>
 #include <linux/blkdev.h>
 #include <linux/bvec.h>
 #include <linux/anon_inodes.h>
@@ -79,14 +80,16 @@ struct io_ring_ctx {
 		unsigned		cached_sq_head;
 		unsigned		sq_entries;
 		unsigned		sq_mask;
-		unsigned		sq_thread_cpu;
+		unsigned		sq_thread_idle;
 		struct io_uring_sqe	*sq_sqes;
 	} ____cacheline_aligned_in_smp;
 
 	/* IO offload */
 	struct workqueue_struct	*sqo_wq;
+	struct task_struct	*sqo_thread;	/* if using sq thread polling */
 	struct mm_struct	*sqo_mm;
 	struct files_struct	*sqo_files;
+	wait_queue_head_t	sqo_wait;
 
 	struct {
 		/* CQ ring */
@@ -262,6 +265,8 @@ static void __io_cqring_add_event(struct io_ring_ctx *ctx, u64 ki_user_data,
 
 	if (waitqueue_active(&ctx->wait))
 		wake_up(&ctx->wait);
+	if (waitqueue_active(&ctx->sqo_wait))
+		wake_up(&ctx->sqo_wait);
 }
 
 static void io_cqring_add_event(struct io_ring_ctx *ctx, u64 ki_user_data,
@@ -1113,6 +1118,168 @@ static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
 	return false;
 }
 
+static int io_submit_sqes(struct io_ring_ctx *ctx, struct sqe_submit *sqes,
+			  unsigned int nr, bool mm_fault)
+{
+	struct io_submit_state state, *statep = NULL;
+	int ret, i, submitted = 0;
+
+	if (nr > IO_PLUG_THRESHOLD) {
+		io_submit_state_start(&state, ctx, nr);
+		statep = &state;
+	}
+
+	for (i = 0; i < nr; i++) {
+		if (unlikely(mm_fault))
+			ret = -EFAULT;
+		else
+			ret = io_submit_sqe(ctx, &sqes[i], statep);
+		if (!ret) {
+			submitted++;
+			continue;
+		}
+
+		io_cqring_add_event(ctx, sqes[i].sqe->user_data, ret, 0);
+	}
+
+	if (statep)
+		io_submit_state_end(&state);
+
+	return submitted;
+}
+
+static int io_sq_thread(void *data)
+{
+	struct sqe_submit sqes[IO_IOPOLL_BATCH];
+	struct io_ring_ctx *ctx = data;
+	struct mm_struct *cur_mm = NULL;
+	struct files_struct *old_files;
+	mm_segment_t old_fs;
+	DEFINE_WAIT(wait);
+	unsigned inflight;
+	unsigned long timeout;
+
+	old_files = current->files;
+	current->files = ctx->sqo_files;
+
+	old_fs = get_fs();
+	set_fs(USER_DS);
+
+	timeout = inflight = 0;
+	while (!kthread_should_stop()) {
+		bool all_fixed, mm_fault = false;
+		int i;
+
+		if (inflight) {
+			unsigned nr_events = 0;
+
+			if (ctx->flags & IORING_SETUP_IOPOLL) {
+				/*
+				 * We disallow the app entering submit/complete
+				 * with polling, so no need to lock the ring.
+				 */
+				io_iopoll_check(ctx, &nr_events, 0);
+			} else {
+				/*
+				 * Normal IO, just pretend everything completed.
+				 * We don't have to poll completions for that.
+				 */
+				nr_events = inflight;
+			}
+
+			inflight -= nr_events;
+			if (!inflight)
+				timeout = jiffies + ctx->sq_thread_idle;
+		}
+
+		if (!io_get_sqring(ctx, &sqes[0])) {
+			/*
+			 * We're polling. If we're within the defined idle
+			 * period, then let us spin without work before going
+			 * to sleep.
+			 */
+			if (inflight || !time_after(jiffies, timeout)) {
+				cpu_relax();
+				continue;
+			}
+
+			/*
+			 * Drop cur_mm before scheduling, we can't hold it for
+			 * long periods (or over schedule()). Do this before
+			 * adding ourselves to the waitqueue, as the unuse/drop
+			 * may sleep.
+			 */
+			if (cur_mm) {
+				unuse_mm(cur_mm);
+				mmput(cur_mm);
+				cur_mm = NULL;
+			}
+
+			prepare_to_wait(&ctx->sqo_wait, &wait,
+						TASK_INTERRUPTIBLE);
+
+			/* Tell userspace we may need a wakeup call */
+			ctx->sq_ring->flags |= IORING_SQ_NEED_WAKEUP;
+			smp_wmb();
+
+			if (!io_get_sqring(ctx, &sqes[0])) {
+				if (kthread_should_park())
+					kthread_parkme();
+				if (kthread_should_stop()) {
+					finish_wait(&ctx->sqo_wait, &wait);
+					break;
+				}
+				if (signal_pending(current))
+					flush_signals(current);
+				schedule();
+				finish_wait(&ctx->sqo_wait, &wait);
+
+				ctx->sq_ring->flags &= ~IORING_SQ_NEED_WAKEUP;
+				smp_wmb();
+				continue;
+			}
+			finish_wait(&ctx->sqo_wait, &wait);
+
+			ctx->sq_ring->flags &= ~IORING_SQ_NEED_WAKEUP;
+			smp_wmb();
+		}
+
+		i = 0;
+		all_fixed = true;
+		do {
+			if (all_fixed && io_sqe_needs_user(sqes[i].sqe))
+				all_fixed = false;
+
+			i++;
+			if (i == ARRAY_SIZE(sqes))
+				break;
+		} while (io_get_sqring(ctx, &sqes[i]));
+
+		io_commit_sqring(ctx);
+
+		/* Unless all new commands are FIXED regions, grab mm */
+		if (!all_fixed && !cur_mm) {
+			mm_fault = !mmget_not_zero(ctx->sqo_mm);
+			if (!mm_fault) {
+				use_mm(ctx->sqo_mm);
+				cur_mm = ctx->sqo_mm;
+			}
+		}
+
+		inflight += io_submit_sqes(ctx, sqes, i, mm_fault);
+	}
+
+	io_iopoll_reap_events(ctx);
+
+	current->files = old_files;
+	set_fs(old_fs);
+	if (cur_mm) {
+		unuse_mm(cur_mm);
+		mmput(cur_mm);
+	}
+	return 0;
+}
+
 static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
 {
 	struct io_submit_state state, *statep = NULL;
@@ -1198,6 +1365,17 @@ static int __io_uring_enter(struct io_ring_ctx *ctx, unsigned to_submit,
 {
 	int submitted, ret;
 
+	/*
+	 * For SQ polling, the thread will do all submissions and completions.
+	 * Just return the requested submit count, and wake the thread if
+	 * we were asked to.
+	 */
+	if (ctx->flags & IORING_SETUP_SQPOLL) {
+		if (flags & IORING_ENTER_SQ_WAKEUP)
+			wake_up(&ctx->sqo_wait);
+		return to_submit;
+	}
+
 	submitted = ret = 0;
 	if (to_submit) {
 		submitted = io_ring_submit(ctx, to_submit);
@@ -1276,10 +1454,12 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
 	return ret;
 }
 
-static int io_sq_offload_start(struct io_ring_ctx *ctx)
+static int io_sq_offload_start(struct io_ring_ctx *ctx,
+			       struct io_uring_params *p)
 {
 	int ret;
 
+	init_waitqueue_head(&ctx->sqo_wait);
 	ctx->sqo_mm = current->mm;
 
 	/*
@@ -1292,6 +1472,34 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx)
 	if (!ctx->sqo_files)
 		goto err;
 
+	ctx->sq_thread_idle = msecs_to_jiffies(p->sq_thread_idle);
+	if (!ctx->sq_thread_idle)
+		ctx->sq_thread_idle = HZ;
+
+	if (ctx->flags & IORING_SETUP_SQPOLL) {
+		if (p->flags & IORING_SETUP_SQ_AFF) {
+			int cpu;
+
+			cpu = array_index_nospec(p->sq_thread_cpu, NR_CPUS);
+			ctx->sqo_thread = kthread_create_on_cpu(io_sq_thread,
+							ctx, cpu,
+							"io_uring-sq");
+		} else {
+			ctx->sqo_thread = kthread_create(io_sq_thread, ctx,
+							"io_uring-sq");
+		}
+		if (IS_ERR(ctx->sqo_thread)) {
+			ret = PTR_ERR(ctx->sqo_thread);
+			ctx->sqo_thread = NULL;
+			goto err;
+		}
+		wake_up_process(ctx->sqo_thread);
+	} else if (p->flags & IORING_SETUP_SQ_AFF) {
+		/* Can't have SQ_AFF without SQPOLL */
+		ret = -EINVAL;
+		goto err;
+	}
+
 	/* Do QD, or 2 * CPUS, whatever is smallest */
 	ctx->sqo_wq = alloc_workqueue("io_ring-wq", WQ_UNBOUND | WQ_FREEZABLE,
 			min(ctx->sq_entries - 1, 2 * num_online_cpus()));
@@ -1302,6 +1510,11 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx)
 
 	return 0;
 err:
+	if (ctx->sqo_thread) {
+		kthread_park(ctx->sqo_thread);
+		kthread_stop(ctx->sqo_thread);
+		ctx->sqo_thread = NULL;
+	}
 	if (ctx->sqo_files)
 		ctx->sqo_files = NULL;
 	ctx->sqo_mm = NULL;
@@ -1560,8 +1773,13 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
 
 static void io_ring_ctx_free(struct io_ring_ctx *ctx)
 {
+	if (ctx->sqo_thread) {
+		kthread_park(ctx->sqo_thread);
+		kthread_stop(ctx->sqo_thread);
+	}
 	destroy_workqueue(ctx->sqo_wq);
-	io_iopoll_reap_events(ctx);
+	if (!(ctx->flags & IORING_SETUP_SQPOLL))
+		io_iopoll_reap_events(ctx);
 	io_sqe_buffer_unregister(ctx);
 	io_sqe_files_unregister(ctx);
 
@@ -1602,7 +1820,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 	percpu_ref_kill(&ctx->refs);
 	mutex_unlock(&ctx->uring_lock);
 
-	io_iopoll_reap_events(ctx);
+	if (!(ctx->flags & IORING_SETUP_SQPOLL))
+		io_iopoll_reap_events(ctx);
 	wait_for_completion(&ctx->ctx_done);
 	io_ring_ctx_free(ctx);
 }
@@ -1771,7 +1990,7 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p)
 	if (ret)
 		goto err;
 
-	ret = io_sq_offload_start(ctx);
+	ret = io_sq_offload_start(ctx, p);
 	if (ret)
 		goto err;
 
@@ -1820,7 +2039,8 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
 			return -EINVAL;
 	}
 
-	if (p.flags & ~IORING_SETUP_IOPOLL)
+	if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
+			IORING_SETUP_SQ_AFF))
 		return -EINVAL;
 
 	ret = io_uring_create(entries, &p);
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 8323320077ec..cb3592d618fe 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -44,6 +44,8 @@ struct io_uring_sqe {
  * io_uring_setup() flags
  */
 #define IORING_SETUP_IOPOLL	(1 << 0)	/* io_context is polled */
+#define IORING_SETUP_SQPOLL	(1 << 1)	/* SQ poll thread */
+#define IORING_SETUP_SQ_AFF	(1 << 2)	/* sq_thread_cpu is valid */
 
 #define IORING_OP_NOP		0
 #define IORING_OP_READV		1
@@ -87,6 +89,11 @@ struct io_sqring_offsets {
 	__u32 resv[3];
 };
 
+/*
+ * sq_ring->flags
+ */
+#define IORING_SQ_NEED_WAKEUP	(1 << 0) /* needs io_uring_enter wakeup */
+
 struct io_cqring_offsets {
 	__u32 head;
 	__u32 tail;
@@ -101,6 +108,7 @@ struct io_cqring_offsets {
  * io_uring_enter(2) flags
  */
 #define IORING_ENTER_GETEVENTS	(1 << 0)
+#define IORING_ENTER_SQ_WAKEUP	(1 << 1)
 
 /*
  * Passed in for io_uring_setup(2). Copied back with updated info on success
@@ -109,7 +117,9 @@ struct io_uring_params {
 	__u32 sq_entries;
 	__u32 cq_entries;
 	__u32 flags;
-	__u16 resv[10];
+	__u16 sq_thread_cpu;
+	__u16 sq_thread_idle;
+	__u16 resv[8];
 	struct io_sqring_offsets sq_off;
 	struct io_cqring_offsets cq_off;
 };
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 15/18] io_uring: add io_kiocb ref count
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (13 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 14/18] io_uring: add submission polling Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-29 17:26   ` Christoph Hellwig
  2019-01-28 21:35 ` [PATCH 16/18] io_uring: add support for IORING_OP_POLL Jens Axboe
                   ` (2 subsequent siblings)
  17 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

We'll use this for the POLL implementation. Regular requests will
NOT be using references, so initialize it to 0. Any real use of
the io_kiocb ref will initialize it to at least 2.
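
In other words, the convention is (sketch):

refcount_set(&req->refs, 0);	/* normal request, freed on io_free_req() */
refcount_set(&req->refs, 2);	/* e.g. poll: waitqueue ref + submission ref */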

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 34e786eb2e2d..71bf890e25f2 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -139,6 +139,7 @@ struct io_kiocb {
 	struct io_ring_ctx	*ctx;
 	struct list_head	list;
 	unsigned int		flags;
+	refcount_t		refs;
 #define REQ_F_FORCE_NONBLOCK	1	/* inline submission attempt */
 #define REQ_F_IOPOLL_COMPLETED	2	/* polled IO has completed */
 #define REQ_F_IOPOLL_EAGAIN	4	/* submission got EAGAIN */
@@ -319,6 +320,7 @@ static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
 	if (req) {
 		req->ctx = ctx;
 		req->flags = 0;
+		refcount_set(&req->refs, 0);
 		return req;
 	}
 
@@ -338,8 +340,10 @@ static void io_free_req_many(struct io_ring_ctx *ctx, void **reqs, int *nr)
 
 static void io_free_req(struct io_kiocb *req)
 {
-	io_ring_drop_ctx_refs(req->ctx, 1);
-	kmem_cache_free(req_cachep, req);
+	if (!refcount_read(&req->refs) || refcount_dec_and_test(&req->refs)) {
+		io_ring_drop_ctx_refs(req->ctx, 1);
+		kmem_cache_free(req_cachep, req);
+	}
 }
 
 /*
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 16/18] io_uring: add support for IORING_OP_POLL
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (14 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 15/18] io_uring: add io_kiocb ref count Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 17/18] io_uring: allow workqueue item to handle multiple buffered requests Jens Axboe
  2019-01-28 21:35 ` [PATCH 18/18] io_uring: add io_uring_event cache hit information Jens Axboe
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

This is basically a direct port of bfe4037e722e, which implements a
one-shot poll command through aio. Description below is based on that
commit as well. However, instead of adding a POLL command and relying
on io_cancel(2) to remove it, we mimic the epoll(2) interface of
having a command to add a poll notification, IORING_OP_POLL_ADD,
and one to remove it again, IORING_OP_POLL_REMOVE.

To poll for a file descriptor the application should submit an sqe of
type IORING_OP_POLL. It will poll the fd for the events specified in the
poll_events field.

Unlike poll or epoll without EPOLLONESHOT, this interface always works
in one-shot mode; that is, once the sqe is completed, it will have to be
resubmitted.
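
As a sketch of the application side (sqe acquisition and submission are
omitted, and sock_fd plus the 0x1234 tag are placeholder values):

/* arm a one-shot poll for readability on sock_fd */
sqe->opcode = IORING_OP_POLL_ADD;
sqe->fd = sock_fd;
sqe->poll_events = POLLIN;
sqe->user_data = 0x1234;	/* echoed back in the cqe */

/* take it down again before it triggers */
sqe->opcode = IORING_OP_POLL_REMOVE;
sqe->addr = 0x1234;		/* user_data of the poll request to remove */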

Based-on-code-from: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c                 | 245 ++++++++++++++++++++++++++++++++++
 include/uapi/linux/io_uring.h |   3 +
 2 files changed, 248 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 71bf890e25f2..13e9bb0ce44e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -122,6 +122,7 @@ struct io_ring_ctx {
 		spinlock_t		completion_lock;
 		bool			poll_multi_file;
 		struct list_head	poll_list;
+		struct list_head	cancel_list;
 	} ____cacheline_aligned_in_smp;
 };
 
@@ -130,9 +131,19 @@ struct sqe_submit {
 	unsigned			index;
 };
 
+struct io_poll_iocb {
+	struct file *file;
+	struct wait_queue_head *head;
+	__poll_t events;
+	bool woken;
+	bool canceled;
+	struct wait_queue_entry wait;
+};
+
 struct io_kiocb {
 	union {
 		struct kiocb		rw;
+		struct io_poll_iocb	poll;
 		struct sqe_submit	submit;
 	};
 
@@ -204,6 +215,7 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	init_waitqueue_head(&ctx->wait);
 	spin_lock_init(&ctx->completion_lock);
 	INIT_LIST_HEAD(&ctx->poll_list);
+	INIT_LIST_HEAD(&ctx->cancel_list);
 	return ctx;
 }
 
@@ -914,6 +926,232 @@ static int io_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	return 0;
 }
 
+static void io_poll_remove_one(struct io_kiocb *req)
+{
+	struct io_poll_iocb *poll = &req->poll;
+
+	spin_lock(&poll->head->lock);
+	WRITE_ONCE(poll->canceled, true);
+	if (!list_empty(&poll->wait.entry)) {
+		list_del_init(&poll->wait.entry);
+		queue_work(req->ctx->sqo_wq, &req->work);
+	}
+	spin_unlock(&poll->head->lock);
+
+	list_del_init(&req->list);
+}
+
+static void io_poll_remove_all(struct io_ring_ctx *ctx)
+{
+	struct io_kiocb *req;
+
+	spin_lock_irq(&ctx->completion_lock);
+	while (!list_empty(&ctx->cancel_list)) {
+		req = list_first_entry(&ctx->cancel_list, struct io_kiocb,list);
+		io_poll_remove_one(req);
+	}
+	spin_unlock_irq(&ctx->completion_lock);
+}
+
+/*
+ * Find a running poll command that matches one specified in sqe->addr,
+ * and remove it if found.
+ */
+static int io_poll_remove(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+	struct io_kiocb *poll_req, *next;
+	int ret = -ENOENT;
+
+	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+		return -EINVAL;
+	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index ||
+	    sqe->poll_events)
+		return -EINVAL;
+
+	spin_lock_irq(&ctx->completion_lock);
+	list_for_each_entry_safe(poll_req, next, &ctx->cancel_list, list) {
+		if (sqe->addr == poll_req->user_data) {
+			io_poll_remove_one(poll_req);
+			ret = 0;
+			break;
+		}
+	}
+	spin_unlock_irq(&ctx->completion_lock);
+
+	io_cqring_add_event(req->ctx, sqe->user_data, ret, 0);
+	io_free_req(req);
+	return 0;
+}
+
+static void io_poll_complete(struct io_kiocb *req, __poll_t mask)
+{
+	io_cqring_add_event(req->ctx, req->user_data, mangle_poll(mask), 0);
+	io_fput(req);
+	io_free_req(req);
+}
+
+static void io_poll_complete_work(struct work_struct *work)
+{
+	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+	struct io_poll_iocb *poll = &req->poll;
+	struct poll_table_struct pt = { ._key = poll->events };
+	struct io_ring_ctx *ctx = req->ctx;
+	__poll_t mask = 0;
+
+	if (!READ_ONCE(poll->canceled))
+		mask = vfs_poll(poll->file, &pt) & poll->events;
+
+	/*
+	 * Note that ->ki_cancel callers also delete iocb from active_reqs after
+	 * calling ->ki_cancel.  We need the ctx_lock roundtrip here to
+	 * synchronize with them.  In the cancellation case the list_del_init
+	 * itself is not actually needed, but harmless so we keep it in to
+	 * avoid further branches in the fast path.
+	 */
+	spin_lock_irq(&ctx->completion_lock);
+	if (!mask && !READ_ONCE(poll->canceled)) {
+		add_wait_queue(poll->head, &poll->wait);
+		spin_unlock_irq(&ctx->completion_lock);
+		return;
+	}
+	list_del_init(&req->list);
+	spin_unlock_irq(&ctx->completion_lock);
+
+	io_poll_complete(req, mask);
+}
+
+static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+			void *key)
+{
+	struct io_poll_iocb *poll = container_of(wait, struct io_poll_iocb,
+							wait);
+	struct io_kiocb *req = container_of(poll, struct io_kiocb, poll);
+	struct io_ring_ctx *ctx = req->ctx;
+	__poll_t mask = key_to_poll(key);
+
+	poll->woken = true;
+
+	/* for instances that support it check for an event match first: */
+	if (mask) {
+		if (!(mask & poll->events))
+			return 0;
+
+		/* try to complete the iocb inline if we can: */
+		if (spin_trylock(&ctx->completion_lock)) {
+			list_del(&req->list);
+			spin_unlock(&ctx->completion_lock);
+
+			list_del_init(&poll->wait.entry);
+			io_poll_complete(req, mask);
+			return 1;
+		}
+	}
+
+	list_del_init(&poll->wait.entry);
+	queue_work(ctx->sqo_wq, &req->work);
+	return 1;
+}
+
+struct io_poll_table {
+	struct poll_table_struct pt;
+	struct io_kiocb *req;
+	int error;
+};
+
+static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
+			       struct poll_table_struct *p)
+{
+	struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
+
+	if (unlikely(pt->req->poll.head)) {
+		pt->error = -EINVAL;
+		return;
+	}
+
+	pt->error = 0;
+	pt->req->poll.head = head;
+	add_wait_queue(head, &pt->req->poll.wait);
+}
+
+static int io_poll_add(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+{
+	struct io_poll_iocb *poll = &req->poll;
+	struct io_ring_ctx *ctx = req->ctx;
+	struct io_poll_table ipt;
+	__poll_t mask;
+
+	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+		return -EINVAL;
+	if (sqe->addr || sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
+		return -EINVAL;
+
+	INIT_WORK(&req->work, io_poll_complete_work);
+	poll->events = demangle_poll(sqe->poll_events) | EPOLLERR | EPOLLHUP;
+
+	if (sqe->flags & IOSQE_FIXED_FILE) {
+		if (unlikely(!ctx->user_files || sqe->fd >= ctx->nr_user_files))
+			return -EBADF;
+		poll->file = ctx->user_files[sqe->fd];
+		req->flags |= REQ_F_FIXED_FILE;
+	} else {
+		poll->file = fget(sqe->fd);
+	}
+	if (unlikely(!poll->file))
+		return -EBADF;
+
+	poll->head = NULL;
+	poll->woken = false;
+	poll->canceled = false;
+
+	ipt.pt._qproc = io_poll_queue_proc;
+	ipt.pt._key = poll->events;
+	ipt.req = req;
+	ipt.error = -EINVAL; /* same as no support for IOCB_CMD_POLL */
+
+	/* initialized the list so that we can do list_empty checks */
+	INIT_LIST_HEAD(&poll->wait.entry);
+	init_waitqueue_func_entry(&poll->wait, io_poll_wake);
+
+	/* one for removal from waitqueue, one for this function */
+	refcount_set(&req->refs, 2);
+
+	mask = vfs_poll(poll->file, &ipt.pt) & poll->events;
+	if (unlikely(!poll->head)) {
+		/* we did not manage to set up a waitqueue, done */
+		goto out;
+	}
+
+	spin_lock_irq(&ctx->completion_lock);
+	spin_lock(&poll->head->lock);
+	if (poll->woken) {
+		/* wake_up context handles the rest */
+		mask = 0;
+		ipt.error = 0;
+	} else if (mask || ipt.error) {
+		/* if we get an error or a mask we are done */
+		WARN_ON_ONCE(list_empty(&poll->wait.entry));
+		list_del_init(&poll->wait.entry);
+	} else {
+		/* actually waiting for an event */
+		list_add_tail(&req->list, &ctx->cancel_list);
+	}
+	spin_unlock(&poll->head->lock);
+	spin_unlock_irq(&ctx->completion_lock);
+
+out:
+	if (unlikely(ipt.error)) {
+		if (!(sqe->flags & IOSQE_FIXED_FILE))
+			fput(poll->file);
+		return ipt.error;
+	}
+
+	if (mask)
+		io_poll_complete(req, mask);
+	io_free_req(req);
+	return 0;
+}
+
 static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 			   struct sqe_submit *s, bool force_nonblock,
 			   struct io_submit_state *state)
@@ -949,6 +1187,12 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	case IORING_OP_FSYNC:
 		ret = io_fsync(req, sqe, force_nonblock);
 		break;
+	case IORING_OP_POLL_ADD:
+		ret = io_poll_add(req, sqe);
+		break;
+	case IORING_OP_POLL_REMOVE:
+		ret = io_poll_remove(req, sqe);
+		break;
 	default:
 		ret = -EINVAL;
 		break;
@@ -1824,6 +2068,7 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 	percpu_ref_kill(&ctx->refs);
 	mutex_unlock(&ctx->uring_lock);
 
+	io_poll_remove_all(ctx);
 	if (!(ctx->flags & IORING_SETUP_SQPOLL))
 		io_iopoll_reap_events(ctx);
 	wait_for_completion(&ctx->ctx_done);
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index cb3592d618fe..39d3d3336dce 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -27,6 +27,7 @@ struct io_uring_sqe {
 	union {
 		__kernel_rwf_t	rw_flags;
 		__u32		fsync_flags;
+		__u16		poll_events;
 	};
 	__u64	user_data;	/* data to be passed back at completion time */
 	union {
@@ -53,6 +54,8 @@ struct io_uring_sqe {
 #define IORING_OP_FSYNC		3
 #define IORING_OP_READ_FIXED	4
 #define IORING_OP_WRITE_FIXED	5
+#define IORING_OP_POLL_ADD	6
+#define IORING_OP_POLL_REMOVE	7
 
 /*
  * sqe->fsync_flags
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 17/18] io_uring: allow workqueue item to handle multiple buffered requests
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (15 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 16/18] io_uring: add support for IORING_OP_POLL Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  2019-01-28 21:35 ` [PATCH 18/18] io_uring: add io_uring_event cache hit information Jens Axboe
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

Right now we punt any buffered request that ends up triggering an
-EAGAIN to an async workqueue. This works fine in terms of providing
async execution of them, but it also can create quite a lot of work
queue items. For sequentially buffered IO, it's advantageous to
serialize the issue of them. For reads, the first one will trigger a
read-ahead, and subsequent requests merely end up waiting on later pages
to complete. For writes, devices usually respond better to streamed
sequential writes.

Add state to track the last buffered request we punted to a work queue,
and if the next one is sequential to the previous, attempt to get the
previous work item to handle it. We limit the number of sequential
add-ons to a multiple (8) of the max read-ahead size of the file.
This should be a good number for both reads and writes, as it defines the
max IO size the device can do directly.

This drastically cuts down on the number of context switches we need to
handle buffered sequential IO, and a basic test case of copying a big
file with io_uring sees a 5x speedup.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 231 ++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 194 insertions(+), 37 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 13e9bb0ce44e..0240e0a04f6a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -67,6 +67,16 @@ struct io_mapped_ubuf {
 	unsigned int	nr_bvecs;
 };
 
+struct async_list {
+	spinlock_t		lock;
+	atomic_t		cnt;
+	struct list_head	list;
+
+	struct file		*file;
+	off_t			io_end;
+	size_t			io_pages;
+};
+
 struct io_ring_ctx {
 	struct {
 		struct percpu_ref	refs;
@@ -124,6 +134,8 @@ struct io_ring_ctx {
 		struct list_head	poll_list;
 		struct list_head	cancel_list;
 	} ____cacheline_aligned_in_smp;
+
+	struct async_list	pending_async[2];
 };
 
 struct sqe_submit {
@@ -155,6 +167,7 @@ struct io_kiocb {
 #define REQ_F_IOPOLL_COMPLETED	2	/* polled IO has completed */
 #define REQ_F_IOPOLL_EAGAIN	4	/* submission got EAGAIN */
 #define REQ_F_FIXED_FILE	8	/* ctx owns file */
+#define REQ_F_SEQ_PREV		16	/* sequential with previous */
 	u64			user_data;
 	u64			res;
 
@@ -198,6 +211,7 @@ static void io_ring_ctx_ref_free(struct percpu_ref *ref)
 static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 {
 	struct io_ring_ctx *ctx;
+	int i;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
@@ -213,6 +227,11 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	init_completion(&ctx->ctx_done);
 	mutex_init(&ctx->uring_lock);
 	init_waitqueue_head(&ctx->wait);
+	for (i = 0; i < ARRAY_SIZE(ctx->pending_async); i++) {
+		spin_lock_init(&ctx->pending_async[i].lock);
+		INIT_LIST_HEAD(&ctx->pending_async[i].list);
+		atomic_set(&ctx->pending_async[i].cnt, 0);
+	}
 	spin_lock_init(&ctx->completion_lock);
 	INIT_LIST_HEAD(&ctx->poll_list);
 	INIT_LIST_HEAD(&ctx->cancel_list);
@@ -772,6 +791,39 @@ static int io_import_iovec(struct io_ring_ctx *ctx, int rw,
 	return import_iovec(rw, buf, sqe->len, UIO_FASTIOV, iovec, iter);
 }
 
+static void io_async_list_note(int rw, struct io_kiocb *req, size_t len)
+{
+	struct async_list *async_list = &req->ctx->pending_async[rw];
+	struct kiocb *kiocb = &req->rw;
+	struct file *filp = kiocb->ki_filp;
+	off_t io_end = kiocb->ki_pos + len;
+
+	if (filp == async_list->file && kiocb->ki_pos == async_list->io_end) {
+		unsigned long max_pages;
+
+		/* Use 8x RA size as a decent limiter for both reads/writes */
+		max_pages = filp->f_ra.ra_pages;
+		if (!max_pages)
+			max_pages = VM_MAX_READAHEAD >> (PAGE_SHIFT - 10);
+		max_pages *= 8;
+
+		len >>= PAGE_SHIFT;
+		if (async_list->io_pages + len <= max_pages) {
+			req->flags |= REQ_F_SEQ_PREV;
+			async_list->io_pages += len;
+		} else {
+			io_end = 0;
+			async_list->io_pages = 0;
+		}
+	}
+
+	if (async_list->file != filp) {
+		async_list->io_pages = 0;
+		async_list->file = filp;
+	}
+	async_list->io_end = io_end;
+}
+
 static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 		       bool force_nonblock, struct io_submit_state *state)
 {
@@ -779,6 +831,7 @@ static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	struct kiocb *kiocb = &req->rw;
 	struct iov_iter iter;
 	struct file *file;
+	size_t iov_count;
 	ssize_t ret;
 
 	ret = io_prep_rw(req, sqe, force_nonblock, state);
@@ -797,16 +850,19 @@ static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	if (ret)
 		goto out_fput;
 
-	ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_iter_count(&iter));
+	iov_count = iov_iter_count(&iter);
+	ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_count);
 	if (!ret) {
 		ssize_t ret2;
 
 		/* Catch -EAGAIN return for forced non-blocking submission */
 		ret2 = call_read_iter(file, kiocb, &iter);
-		if (!force_nonblock || ret2 != -EAGAIN)
+		if (!force_nonblock || ret2 != -EAGAIN) {
 			io_rw_done(kiocb, ret2);
-		else
+		} else {
+			io_async_list_note(READ, req, iov_count);
 			ret = -EAGAIN;
+		}
 	}
 	kfree(iovec);
 out_fput:
@@ -822,6 +878,7 @@ static ssize_t io_write(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	struct kiocb *kiocb = &req->rw;
 	struct iov_iter iter;
 	struct file *file;
+	size_t iov_count;
 	ssize_t ret;
 
 	ret = io_prep_rw(req, sqe, force_nonblock, state);
@@ -829,10 +886,6 @@ static ssize_t io_write(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 		return ret;
 	file = kiocb->ki_filp;
 
-	ret = -EAGAIN;
-	if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT))
-		goto out_fput;
-
 	ret = -EBADF;
 	if (unlikely(!(file->f_mode & FMODE_WRITE)))
 		goto out_fput;
@@ -844,8 +897,15 @@ static ssize_t io_write(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	if (ret)
 		goto out_fput;
 
-	ret = rw_verify_area(WRITE, file, &kiocb->ki_pos,
-				iov_iter_count(&iter));
+	iov_count = iov_iter_count(&iter);
+
+	ret = -EAGAIN;
+	if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT)) {
+		io_async_list_note(WRITE, req, iov_count);
+		goto out_free;
+	}
+
+	ret = rw_verify_area(WRITE, file, &kiocb->ki_pos, iov_count);
 	if (!ret) {
 		/*
 		 * Open-code file_start_write here to grab freeze protection,
@@ -863,6 +923,7 @@ static ssize_t io_write(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 		kiocb->ki_flags |= IOCB_WRITE;
 		io_rw_done(kiocb, call_write_iter(file, kiocb, &iter));
 	}
+out_free:
 	kfree(iovec);
 out_fput:
 	if (unlikely(ret))
@@ -1210,6 +1271,21 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	return 0;
 }
 
+static struct async_list *io_async_list_from_sqe(struct io_ring_ctx *ctx,
+						 const struct io_uring_sqe *sqe)
+{
+	switch (sqe->opcode) {
+	case IORING_OP_READV:
+	case IORING_OP_READ_FIXED:
+		return &ctx->pending_async[READ];
+	case IORING_OP_WRITEV:
+	case IORING_OP_WRITE_FIXED:
+		return &ctx->pending_async[WRITE];
+	default:
+		return NULL;
+	}
+}
+
 static inline bool io_sqe_needs_user(const struct io_uring_sqe *sqe)
 {
 	return !(sqe->opcode == IORING_OP_READ_FIXED ||
@@ -1219,50 +1295,124 @@ static inline bool io_sqe_needs_user(const struct io_uring_sqe *sqe)
 static void io_sq_wq_submit_work(struct work_struct *work)
 {
 	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
-	struct sqe_submit *s = &req->submit;
-	u64 user_data = s->sqe->user_data;
 	struct io_ring_ctx *ctx = req->ctx;
+	struct mm_struct *cur_mm = NULL;
 	struct files_struct *old_files;
+	struct async_list *async_list;
+	LIST_HEAD(req_list);
 	mm_segment_t old_fs;
-	bool needs_user;
 	int ret;
 
-	 /* Ensure we clear previously set forced non-block flag */
-	req->flags &= ~REQ_F_FORCE_NONBLOCK;
-
 	old_files = current->files;
 	current->files = ctx->sqo_files;
 
+	async_list = io_async_list_from_sqe(ctx, req->submit.sqe);
+restart:
+	do {
+		struct sqe_submit *s = &req->submit;
+		u64 user_data = s->sqe->user_data;
+
+		/* Ensure we clear previously set forced non-block flag */
+		req->flags &= ~REQ_F_FORCE_NONBLOCK;
+
+		ret = 0;
+		if (io_sqe_needs_user(s->sqe) && !cur_mm) {
+			if (!mmget_not_zero(ctx->sqo_mm)) {
+				ret = -EFAULT;
+			} else {
+				cur_mm = ctx->sqo_mm;
+				use_mm(ctx->sqo_mm);
+				old_fs = get_fs();
+				set_fs(USER_DS);
+			}
+		}
+
+		if (!ret)
+			ret = __io_submit_sqe(ctx, req, s, false, NULL);
+		if (ret) {
+			io_cqring_add_event(ctx, user_data, ret, 0);
+			io_free_req(req);
+		}
+		if (!async_list)
+			break;
+		if (!list_empty(&req_list)) {
+			req = list_first_entry(&req_list, struct io_kiocb,
+						list);
+			list_del(&req->list);
+			continue;
+		}
+		if (list_empty(&async_list->list))
+			break;
+
+		req = NULL;
+		spin_lock(&async_list->lock);
+		if (list_empty(&async_list->list)) {
+			spin_unlock(&async_list->lock);
+			break;
+		}
+		list_splice_init(&async_list->list, &req_list);
+		spin_unlock(&async_list->lock);
+
+		req = list_first_entry(&req_list, struct io_kiocb, list);
+		list_del(&req->list);
+	} while (req);
+
 	/*
-	 * If we're doing IO to fixed buffers, we don't need to get/set
-	 * user context
+	 * Rare case of racing with a submitter. If we find the count has
+	 * dropped to zero AND we have pending work items, then restart
+	 * the processing. This is a tiny race window.
 	 */
-	needs_user = io_sqe_needs_user(s->sqe);
-	if (needs_user) {
-		if (!mmget_not_zero(ctx->sqo_mm)) {
-			ret = -EFAULT;
-			goto err;
+	ret = atomic_dec_return(&async_list->cnt);
+	while (!ret && !list_empty(&async_list->list)) {
+		spin_lock(&async_list->lock);
+		atomic_inc(&async_list->cnt);
+		list_splice_init(&async_list->list, &req_list);
+		spin_unlock(&async_list->lock);
+
+		if (!list_empty(&req_list)) {
+			req = list_first_entry(&req_list, struct io_kiocb,
+						list);
+			list_del(&req->list);
+			goto restart;
 		}
-		use_mm(ctx->sqo_mm);
-		old_fs = get_fs();
-		set_fs(USER_DS);
+		ret = atomic_dec_return(&async_list->cnt);
 	}
 
-	ret = __io_submit_sqe(ctx, req, s, false, NULL);
-
-	if (needs_user) {
+	if (cur_mm) {
 		set_fs(old_fs);
-		unuse_mm(ctx->sqo_mm);
-		mmput(ctx->sqo_mm);
-	}
-err:
-	if (ret) {
-		io_cqring_add_event(ctx, user_data, ret, 0);
-		io_free_req(req);
+		unuse_mm(cur_mm);
+		mmput(cur_mm);
 	}
 	current->files = old_files;
 }
 
+/*
+ * See if we can piggy back onto previously submitted work, that is still
+ * running. We currently only allow this if the new request is sequential
+ * to the previous one we punted.
+ */
+static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
+{
+	bool ret = false;
+
+	if (!list)
+		return false;
+	if (!(req->flags & REQ_F_SEQ_PREV))
+		return false;
+	if (!atomic_read(&list->cnt))
+		return false;
+
+	ret = true;
+	spin_lock(&list->lock);
+	list_add_tail(&req->list, &list->list);
+	if (!atomic_read(&list->cnt)) {
+		list_del_init(&req->list);
+		ret = false;
+	}
+	spin_unlock(&list->lock);
+	return ret;
+}
+
 static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
 			 struct io_submit_state *state)
 {
@@ -1279,9 +1429,16 @@ static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
 
 	ret = __io_submit_sqe(ctx, req, s, true, state);
 	if (ret == -EAGAIN) {
+		struct async_list *list;
+
+		list = io_async_list_from_sqe(ctx, s->sqe);
 		memcpy(&req->submit, s, sizeof(*s));
-		INIT_WORK(&req->work, io_sq_wq_submit_work);
-		queue_work(ctx->sqo_wq, &req->work);
+		if (!io_add_to_prev_work(list, req)) {
+			if (list)
+				atomic_inc(&list->cnt);
+			INIT_WORK(&req->work, io_sq_wq_submit_work);
+			queue_work(ctx->sqo_wq, &req->work);
+		}
 		ret = 0;
 	}
 	if (ret)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH 18/18] io_uring: add io_uring_event cache hit information
  2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
                   ` (16 preceding siblings ...)
  2019-01-28 21:35 ` [PATCH 17/18] io_uring: allow workqueue item to handle multiple buffered requests Jens Axboe
@ 2019-01-28 21:35 ` Jens Axboe
  17 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:35 UTC (permalink / raw)
  To: linux-aio, linux-block, linux-man, linux-api; +Cc: hch, jmoyer, avi, Jens Axboe

Add a hint on whether a read was served out of the page cache, or if it
hit media. This is useful for buffered async IO; O_DIRECT reads would
never have this set (for obvious reasons).

If the read hit page cache, cqe->flags will have IOCQE_FLAG_CACHEHIT
set.
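
Checking for it when reaping completions is then just (sketch, cqe
iteration and the cache_hits counter are illustrative):

if (cqe->res > 0 && (cqe->flags & IOCQE_FLAG_CACHEHIT))
	cache_hits++;	/* read was served out of the page cache */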

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c                 | 7 ++++++-
 include/uapi/linux/io_uring.h | 5 +++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 0240e0a04f6a..9b4b4f9d06a5 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -558,11 +558,16 @@ static void io_fput(struct io_kiocb *req)
 static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
 {
 	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw);
+	unsigned ev_flags = 0;
 
 	kiocb_end_write(kiocb);
 
 	io_fput(req);
-	io_cqring_add_event(req->ctx, req->user_data, res, 0);
+
+	if (res > 0 && (req->flags & REQ_F_FORCE_NONBLOCK))
+		ev_flags = IOCQE_FLAG_CACHEHIT;
+
+	io_cqring_add_event(req->ctx, req->user_data, res, ev_flags);
 	io_free_req(req);
 }
 
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 39d3d3336dce..589b6402081d 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -71,6 +71,11 @@ struct io_uring_cqe {
 	__u32	flags;
 };
 
+/*
+ * io_uring_event->flags
+ */
+#define IOCQE_FLAG_CACHEHIT	(1 << 0)	/* IO did not hit media */
+
 /*
  * Magic offsets for the application to mmap the data it needs
  */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 21:35 ` [PATCH 05/18] Add io_uring IO interface Jens Axboe
@ 2019-01-28 21:53   ` Jeff Moyer
  2019-01-28 21:56     ` Jens Axboe
  2019-01-28 22:32   ` Jann Horn
                     ` (4 subsequent siblings)
  5 siblings, 1 reply; 62+ messages in thread
From: Jeff Moyer @ 2019-01-28 21:53 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-aio, linux-block, linux-man, linux-api, hch, avi

Jens Axboe <axboe@kernel.dk> writes:

> +static int __io_uring_enter(struct io_ring_ctx *ctx, unsigned to_submit,
> +			    unsigned min_complete, unsigned flags,
> +			    const sigset_t __user *sig, size_t sigsz)
> +{
> +	int submitted, ret;
> +
> +	submitted = ret = 0;
> +	if (to_submit) {
> +		submitted = io_ring_submit(ctx, to_submit);
> +		if (submitted < 0)
> +			return submitted;
> +	}
> +	if (flags & IORING_ENTER_GETEVENTS) {

Do we want to disallow any unsupported flags?

-Jeff


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 21:53   ` Jeff Moyer
@ 2019-01-28 21:56     ` Jens Axboe
  0 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 21:56 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: linux-aio, linux-block, linux-man, linux-api, hch, avi

On 1/28/19 2:53 PM, Jeff Moyer wrote:
> Jens Axboe <axboe@kernel.dk> writes:
> 
>> +static int __io_uring_enter(struct io_ring_ctx *ctx, unsigned to_submit,
>> +			    unsigned min_complete, unsigned flags,
>> +			    const sigset_t __user *sig, size_t sigsz)
>> +{
>> +	int submitted, ret;
>> +
>> +	submitted = ret = 0;
>> +	if (to_submit) {
>> +		submitted = io_ring_submit(ctx, to_submit);
>> +		if (submitted < 0)
>> +			return submitted;
>> +	}
>> +	if (flags & IORING_ENTER_GETEVENTS) {
> 
> Do we want to disallow any unsupported flags?

Yes good point, we do that in other spots. Fixed.
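
Something along these lines at the top of __io_uring_enter(), modulo
whichever enter flags end up being accepted:

	if (flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP))
		return -EINVAL;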

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 09/18] io_uring: use fget/fput_many() for file references
  2019-01-28 21:35 ` [PATCH 09/18] io_uring: use fget/fput_many() for file references Jens Axboe
@ 2019-01-28 21:56   ` Jann Horn
  2019-01-28 22:03     ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-01-28 21:56 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On Mon, Jan 28, 2019 at 10:36 PM Jens Axboe <axboe@kernel.dk> wrote:
> Add a separate io_submit_state structure, to cache some of the things
> we need for IO submission.
>
> One such example is file reference batching. io_submit_state. We get as
> many references as the number of sqes we are submitting, and drop
> unused ones if we end up switching files. The assumption here is that
> we're usually only dealing with one fd, and if there are multiple,
> hopefully they are at least somewhat ordered. Could trivially be extended
> to cover multiple fds, if needed.
>
> On the completion side we do the same thing, except this is trivially
> done just locally in io_iopoll_reap().
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
[...]
> +/*
> + * Get as many references to a file as we have IOs left in this submission,
> + * assuming most submissions are for one file, or at least that each file
> + * has more than one submission.
> + */
> +static struct file *io_file_get(struct io_submit_state *state, int fd)
> +{
> +       if (!state)
> +               return fget(fd);
> +
> +       if (state->file) {
> +               if (state->fd == fd) {
> +                       state->used_refs++;
> +                       state->ios_left--;
> +                       return state->file;
> +               }
> +               io_file_put(state, NULL);
> +       }
> +       state->file = fget_many(fd, state->ios_left);
> +       if (!state->file)
> +               return NULL;

This looks wrong.

Looking at "[PATCH 05/18] Add io_uring IO interface", as far as I can
tell, io_ring_submit() is called via __io_uring_enter() <-
sys_io_uring_enter() with an unchecked argument "unsigned int
to_submit" that is then, in this patch, stored in state->ios_left and
then used here. On a 32-bit platform, file->f_count is only 32 bits
wide, so I think you can then trivially overflow the reference count,
leading to use-after-free. Am I missing something?

> +       state->fd = fd;
> +       state->has_refs = state->ios_left;
> +       state->used_refs = 1;
> +       state->ios_left--;
> +       return state->file;
> +}
[...]
> +static void io_submit_state_start(struct io_submit_state *state,
> +                                 struct io_ring_ctx *ctx, unsigned max_ios)
> +{
> +       blk_start_plug(&state->plug);
> +       state->file = NULL;
> +       state->ios_left = max_ios;
> +}
> +
>  static void io_commit_sqring(struct io_ring_ctx *ctx)
>  {
>         struct io_sq_ring *ring = ctx->sq_ring;
> @@ -879,11 +974,13 @@ static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
>
>  static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
>  {
> +       struct io_submit_state state, *statep = NULL;
>         int i, ret = 0, submit = 0;
> -       struct blk_plug plug;
>
> -       if (to_submit > IO_PLUG_THRESHOLD)
> -               blk_start_plug(&plug);
> +       if (to_submit > IO_PLUG_THRESHOLD) {
> +               io_submit_state_start(&state, ctx, to_submit);
> +               statep = &state;
> +       }
[...]

--
To unsubscribe, send a message with 'unsubscribe linux-aio' in
the body to majordomo@kvack.org.  For more info on Linux AIO,
see: http://www.kvack.org/aio/
Don't email: <a href=mailto:"aart@kvack.org">aart@kvack.org</a>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 09/18] io_uring: use fget/fput_many() for file references
  2019-01-28 21:56   ` Jann Horn
@ 2019-01-28 22:03     ` Jens Axboe
  0 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 22:03 UTC (permalink / raw)
  To: Jann Horn
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/28/19 2:56 PM, Jann Horn wrote:
> On Mon, Jan 28, 2019 at 10:36 PM Jens Axboe <axboe@kernel.dk> wrote:
>> Add a separate io_submit_state structure, to cache some of the things
>> we need for IO submission.
>>
>> One such example is file reference batching. io_submit_state. We get as
>> many references as the number of sqes we are submitting, and drop
>> unused ones if we end up switching files. The assumption here is that
>> we're usually only dealing with one fd, and if there are multiple,
>> hopefully they are at least somewhat ordered. Could trivially be extended
>> to cover multiple fds, if needed.
>>
>> On the completion side we do the same thing, except this is trivially
>> done just locally in io_iopoll_reap().
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>> ---
> [...]
>> +/*
>> + * Get as many references to a file as we have IOs left in this submission,
>> + * assuming most submissions are for one file, or at least that each file
>> + * has more than one submission.
>> + */
>> +static struct file *io_file_get(struct io_submit_state *state, int fd)
>> +{
>> +       if (!state)
>> +               return fget(fd);
>> +
>> +       if (state->file) {
>> +               if (state->fd == fd) {
>> +                       state->used_refs++;
>> +                       state->ios_left--;
>> +                       return state->file;
>> +               }
>> +               io_file_put(state, NULL);
>> +       }
>> +       state->file = fget_many(fd, state->ios_left);
>> +       if (!state->file)
>> +               return NULL;
> 
> This looks wrong.
> 
> Looking at "[PATCH 05/18] Add io_uring IO interface", as far as I can
> tell, io_ring_submit() is called via __io_uring_enter() <-
> sys_io_uring_enter() with an unchecked argument "unsigned int
> to_submit" that is then, in this patch, stored in state->ios_left and
> then used here. On a 32-bit platform, file->f_count is only 32 bits
> wide, so I think you can then trivially overflow the reference count,
> leading to use-after-free. Am I missing something?

No, that is possible. Since we cap the SQ ring entries at 4k, I think
we should just validate to_submit/min_complete against those numbers.
That would also solve this overflow.
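Something like this near the top of __io_uring_enter(), as a rough sketch
(assuming the sq/cq entry counts from the patch; clamping instead of
returning -EINVAL would work too):

	/* bound both values by the ring sizes, so fget_many() can never
	 * be asked for more references than there are SQ entries */
	if (to_submit > ctx->sq_entries || min_complete > ctx->cq_entries)
		return -EINVAL;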

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 21:35 ` [PATCH 05/18] Add io_uring IO interface Jens Axboe
  2019-01-28 21:53   ` Jeff Moyer
@ 2019-01-28 22:32   ` Jann Horn
  2019-01-28 23:46     ` Jens Axboe
  2019-01-29  1:07   ` Jann Horn
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-01-28 22:32 UTC (permalink / raw)
  To: Jens Axboe, Al Viro
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
> The submission queue (SQ) and completion queue (CQ) rings are shared
> between the application and the kernel. This eliminates the need to
> copy data back and forth to submit and complete IO.
>
> IO submissions use the io_uring_sqe data structure, and completions
> are generated in the form of io_uring_cqe data structures. The SQ
> ring is an index into the io_uring_sqe array, which makes it possible
> to submit a batch of IOs without them being contiguous in the ring.
> The CQ ring is always contiguous, as completion events are inherently
> unordered, and hence any io_uring_cqe entry can point back to an
> arbitrary submission.
>
> Two new system calls are added for this:
>
> io_uring_setup(entries, params)
>         Sets up a context for doing async IO. On success, returns a file
>         descriptor that the application can mmap to gain access to the
>         SQ ring, CQ ring, and io_uring_sqes.
>
> io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
>         Initiates IO against the rings mapped to this fd, or waits for
>         them to complete, or both. The behavior is controlled by the
>         parameters passed in. If 'to_submit' is non-zero, then we'll
>         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
>         kernel will wait for 'min_complete' events, if they aren't
>         already available. It's valid to set IORING_ENTER_GETEVENTS
>         and 'min_complete' == 0 at the same time, this allows the
>         kernel to return already completed events without waiting
>         for them. This is useful only for polling, as for IRQ
>         driven IO, the application can just check the CQ ring
>         without entering the kernel.
>
> With this setup, it's possible to do async IO with a single system
> call. Future developments will enable polled IO with this interface,
> and polled submission as well. The latter will enable an application
> to do IO without doing ANY system calls at all.
>
> For IRQ driven IO, an application only needs to enter the kernel for
> completions if it wants to wait for them to occur.
>
> Each io_uring is backed by a workqueue, to support buffered async IO
> as well. We will only punt to an async context if the command would
> need to wait for IO on the device side. Any data that can be accessed
> directly in the page cache is done inline. This avoids the slowness
> issue of usual threadpools, since cached data is accessed as quickly
> as a sync interface.
>
> Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
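
(For context, the single-syscall submit-and-wait path described above would
look roughly like this from userspace - illustration only, not from the
patch, with ring_fd coming from io_uring_setup():)

	/* submit 8 sqes and wait for at least one completion */
	ret = syscall(__NR_io_uring_enter, ring_fd, 8, 1,
		      IORING_ENTER_GETEVENTS, NULL, 0);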
[...]
> +static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> +                     bool force_nonblock)
> +{
> +       struct kiocb *kiocb = &req->rw;
> +       int ret;
> +
> +       kiocb->ki_filp = fget(sqe->fd);
> +       if (unlikely(!kiocb->ki_filp))
> +               return -EBADF;
> +       kiocb->ki_pos = sqe->off;
> +       kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
> +       kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
> +       if (sqe->ioprio) {
> +               ret = ioprio_check_cap(sqe->ioprio);
> +               if (ret)
> +                       goto out_fput;
> +
> +               kiocb->ki_ioprio = sqe->ioprio;
> +       } else
> +               kiocb->ki_ioprio = get_current_ioprio();
> +
> +       ret = kiocb_set_rw_flags(kiocb, sqe->rw_flags);
> +       if (unlikely(ret))
> +               goto out_fput;
> +       if (force_nonblock) {
> +               kiocb->ki_flags |= IOCB_NOWAIT;
> +               req->flags |= REQ_F_FORCE_NONBLOCK;
> +       }
> +       if (kiocb->ki_flags & IOCB_HIPRI) {
> +               ret = -EINVAL;
> +               goto out_fput;
> +       }
> +
> +       kiocb->ki_complete = io_complete_rw;
> +       return 0;
> +out_fput:
> +       fput(kiocb->ki_filp);
> +       return ret;
> +}
[...]
> +static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> +                      bool force_nonblock)
> +{
> +       struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
> +       struct kiocb *kiocb = &req->rw;
> +       struct iov_iter iter;
> +       struct file *file;
> +       ssize_t ret;
> +
> +       ret = io_prep_rw(req, sqe, force_nonblock);
> +       if (ret)
> +               return ret;
> +       file = kiocb->ki_filp;
> +
> +       ret = -EBADF;
> +       if (unlikely(!(file->f_mode & FMODE_READ)))
> +               goto out_fput;
> +       ret = -EINVAL;
> +       if (unlikely(!file->f_op->read_iter))
> +               goto out_fput;
> +
> +       ret = io_import_iovec(req->ctx, READ, sqe, &iovec, &iter);
> +       if (ret)
> +               goto out_fput;
> +
> +       ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_iter_count(&iter));
> +       if (!ret) {
> +               ssize_t ret2;
> +
> +               /* Catch -EAGAIN return for forced non-blocking submission */
> +               ret2 = call_read_iter(file, kiocb, &iter);
> +               if (!force_nonblock || ret2 != -EAGAIN)
> +                       io_rw_done(kiocb, ret2);
> +               else
> +                       ret = -EAGAIN;
> +       }
> +       kfree(iovec);
> +out_fput:
> +       if (unlikely(ret))
> +               fput(file);
> +       return ret;
> +}
[...]
> +static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> +                          struct sqe_submit *s, bool force_nonblock)
> +{
> +       const struct io_uring_sqe *sqe = s->sqe;
> +       ssize_t ret;
> +
> +       if (unlikely(s->index >= ctx->sq_entries))
> +               return -EINVAL;
> +       req->user_data = sqe->user_data;
> +
> +       ret = -EINVAL;
> +       switch (sqe->opcode) {
> +       case IORING_OP_NOP:
> +               ret = io_nop(req, sqe);
> +               break;
> +       case IORING_OP_READV:
> +               ret = io_read(req, sqe, force_nonblock);
> +               break;
> +       case IORING_OP_WRITEV:
> +               ret = io_write(req, sqe, force_nonblock);
> +               break;
> +       default:
> +               ret = -EINVAL;
> +               break;
> +       }
> +
> +       return ret;
> +}
> +
> +static void io_sq_wq_submit_work(struct work_struct *work)
> +{
> +       struct io_kiocb *req = container_of(work, struct io_kiocb, work);
> +       struct sqe_submit *s = &req->submit;
> +       u64 user_data = s->sqe->user_data;
> +       struct io_ring_ctx *ctx = req->ctx;
> +       mm_segment_t old_fs = get_fs();
> +       struct files_struct *old_files;
> +       int ret;
> +
> +        /* Ensure we clear previously set forced non-block flag */
> +       req->flags &= ~REQ_F_FORCE_NONBLOCK;
> +
> +       old_files = current->files;
> +       current->files = ctx->sqo_files;

I think you're not supposed to twiddle with current->files without
holding task_lock(current).

> +       if (!mmget_not_zero(ctx->sqo_mm)) {
> +               ret = -EFAULT;
> +               goto err;
> +       }
> +
> +       use_mm(ctx->sqo_mm);
> +       set_fs(USER_DS);
> +
> +       ret = __io_submit_sqe(ctx, req, s, false);
> +
> +       set_fs(old_fs);
> +       unuse_mm(ctx->sqo_mm);
> +       mmput(ctx->sqo_mm);
> +err:
> +       if (ret) {
> +               io_cqring_add_event(ctx, user_data, ret, 0);
> +               io_free_req(req);
> +       }
> +       current->files = old_files;
> +}
[...]
> +static int io_sq_offload_start(struct io_ring_ctx *ctx)
> +{
> +       int ret;
> +
> +       ctx->sqo_mm = current->mm;

What keeps this thing alive?

> +       /*
> +        * This is safe since 'current' has the fd installed, and if that gets
> +        * closed on exit, then fops->release() is invoked which waits for the
> +        * async contexts to flush and exit before exiting.
> +        */
> +       ret = -EBADF;
> +       ctx->sqo_files = current->files;
> +       if (!ctx->sqo_files)
> +               goto err;

That's gnarly. Adding Al Viro to the thread.

I think you misunderstand the semantics of f_op->release. The ->flush
handler is invoked whenever a file descriptor is closed through
filp_close() (via deletion of the files_struct, sys_close(),
sys_dup2(), ...), so if you had used that one, _maybe_ this would
work. But the ->release handler only runs when the _last_ reference to
a struct file has been dropped - so you can, for example, fork() a
child, then exit() in the parent, and the ->release handler isn't
invoked. So I don't see how this can work.

But even if you had abused ->flush for this instead: close_files()
currently has a comment in it that claims that "this is the last
reference to the files structure"; this change would make that claim
untrue.
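
To make the fork()/exit() case concrete, a minimal userspace sketch (the
setup helper is a stand-in, not from the patch):

	#include <unistd.h>

	int io_uring_setup_fd(void);	/* stand-in for the io_uring_setup(2) call */

	/*
	 * Hypothetical illustration: CLOEXEC only matters across execve().
	 * fork() duplicates the fd table, so the child keeps a reference to
	 * the io_uring file, and ->release() is not invoked when the parent
	 * exits.
	 */
	int main(void)
	{
		int fd = io_uring_setup_fd();

		if (fork() == 0)
			pause();	/* child: inherited fd pins the struct file */

		(void)fd;
		_exit(0);		/* parent exits; ->release() still not called */
	}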

> +       /* Do QD, or 2 * CPUS, whatever is smallest */
> +       ctx->sqo_wq = alloc_workqueue("io_ring-wq", WQ_UNBOUND | WQ_FREEZABLE,
> +                       min(ctx->sq_entries - 1, 2 * num_online_cpus()));
> +       if (!ctx->sqo_wq) {
> +               ret = -ENOMEM;
> +               goto err;
> +       }
> +
> +       return 0;
> +err:
> +       if (ctx->sqo_files)
> +               ctx->sqo_files = NULL;
> +       ctx->sqo_mm = NULL;
> +       return ret;
> +}
[...]
> +static const struct file_operations io_uring_fops = {
> +       .release        = io_uring_release,
> +       .mmap           = io_uring_mmap,
> +       .poll           = io_uring_poll,
> +       .fasync         = io_uring_fasync,
> +};
[...]


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 12/18] io_uring: add support for pre-mapped user IO buffers
  2019-01-28 21:35 ` [PATCH 12/18] io_uring: add support for pre-mapped user IO buffers Jens Axboe
@ 2019-01-28 23:35   ` Jann Horn
  2019-01-28 23:50     ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-01-28 23:35 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On Mon, Jan 28, 2019 at 10:36 PM Jens Axboe <axboe@kernel.dk> wrote:
> If we have fixed user buffers, we can map them into the kernel when we
> setup the io_context. That avoids the need to do get_user_pages() for
> each and every IO.
[...]
> +static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
> +                              void __user *arg, unsigned nr_args)
> +{
> +       int ret;
> +
> +       /* Drop our initial ref and wait for the ctx to be fully idle */
> +       percpu_ref_put(&ctx->refs);

The line above drops a reference that you just got in the caller...

> +       percpu_ref_kill(&ctx->refs);
> +       wait_for_completion(&ctx->ctx_done);
> +
> +       switch (opcode) {
> +       case IORING_REGISTER_BUFFERS:
> +               ret = io_sqe_buffer_register(ctx, arg, nr_args);
> +               break;
> +       case IORING_UNREGISTER_BUFFERS:
> +               ret = -EINVAL;
> +               if (arg || nr_args)
> +                       break;
> +               ret = io_sqe_buffer_unregister(ctx);
> +               break;
> +       default:
> +               ret = -EINVAL;
> +               break;
> +       }
> +
> +       /* bring the ctx back to life */
> +       reinit_completion(&ctx->ctx_done);
> +       percpu_ref_resurrect(&ctx->refs);
> +       percpu_ref_get(&ctx->refs);

And then this line takes a reference that the caller will immediately
drop again? Why?

> +       return ret;
> +}
> +
> +SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
> +               void __user *, arg, unsigned int, nr_args)
> +{
> +       struct io_ring_ctx *ctx;
> +       long ret = -EBADF;
> +       struct fd f;
> +
> +       f = fdget(fd);
> +       if (!f.file)
> +               return -EBADF;
> +
> +       ret = -EOPNOTSUPP;
> +       if (f.file->f_op != &io_uring_fops)
> +               goto out_fput;
> +
> +       ret = -ENXIO;
> +       ctx = f.file->private_data;
> +       if (!percpu_ref_tryget(&ctx->refs))
> +               goto out_fput;

If you are holding the uring_lock of a ctx that can be accessed
through a file descriptor (which you do just after this point), you
know that the percpu_ref isn't zero, right? Why are you doing the
tryget here?

> +       ret = -EBUSY;
> +       if (mutex_trylock(&ctx->uring_lock)) {
> +               ret = __io_uring_register(ctx, opcode, arg, nr_args);
> +               mutex_unlock(&ctx->uring_lock);
> +       }
> +       io_ring_drop_ctx_refs(ctx, 1);
> +out_fput:
> +       fdput(f);
> +       return ret;
> +}


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 22:32   ` Jann Horn
@ 2019-01-28 23:46     ` Jens Axboe
  2019-01-28 23:59       ` Jann Horn
  0 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 23:46 UTC (permalink / raw)
  To: Jann Horn, Al Viro
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/28/19 3:32 PM, Jann Horn wrote:
> On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
>> The submission queue (SQ) and completion queue (CQ) rings are shared
>> between the application and the kernel. This eliminates the need to
>> copy data back and forth to submit and complete IO.
>>
>> IO submissions use the io_uring_sqe data structure, and completions
>> are generated in the form of io_uring_cqe data structures. The SQ
>> ring is an index into the io_uring_sqe array, which makes it possible
>> to submit a batch of IOs without them being contiguous in the ring.
>> The CQ ring is always contiguous, as completion events are inherently
>> unordered, and hence any io_uring_cqe entry can point back to an
>> arbitrary submission.
>>
>> Two new system calls are added for this:
>>
>> io_uring_setup(entries, params)
>>         Sets up a context for doing async IO. On success, returns a file
>>         descriptor that the application can mmap to gain access to the
>>         SQ ring, CQ ring, and io_uring_sqes.
>>
>> io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
>>         Initiates IO against the rings mapped to this fd, or waits for
>>         them to complete, or both. The behavior is controlled by the
>>         parameters passed in. If 'to_submit' is non-zero, then we'll
>>         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
>>         kernel will wait for 'min_complete' events, if they aren't
>>         already available. It's valid to set IORING_ENTER_GETEVENTS
>>         and 'min_complete' == 0 at the same time, this allows the
>>         kernel to return already completed events without waiting
>>         for them. This is useful only for polling, as for IRQ
>>         driven IO, the application can just check the CQ ring
>>         without entering the kernel.
>>
>> With this setup, it's possible to do async IO with a single system
>> call. Future developments will enable polled IO with this interface,
>> and polled submission as well. The latter will enable an application
>> to do IO without doing ANY system calls at all.
>>
>> For IRQ driven IO, an application only needs to enter the kernel for
>> completions if it wants to wait for them to occur.
>>
>> Each io_uring is backed by a workqueue, to support buffered async IO
>> as well. We will only punt to an async context if the command would
>> need to wait for IO on the device side. Any data that can be accessed
>> directly in the page cache is done inline. This avoids the slowness
>> issue of usual threadpools, since cached data is accessed as quickly
>> as a sync interface.
>>
>> Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
> [...]
>> +static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>> +                     bool force_nonblock)
>> +{
>> +       struct kiocb *kiocb = &req->rw;
>> +       int ret;
>> +
>> +       kiocb->ki_filp = fget(sqe->fd);
>> +       if (unlikely(!kiocb->ki_filp))
>> +               return -EBADF;
>> +       kiocb->ki_pos = sqe->off;
>> +       kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
>> +       kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
>> +       if (sqe->ioprio) {
>> +               ret = ioprio_check_cap(sqe->ioprio);
>> +               if (ret)
>> +                       goto out_fput;
>> +
>> +               kiocb->ki_ioprio = sqe->ioprio;
>> +       } else
>> +               kiocb->ki_ioprio = get_current_ioprio();
>> +
>> +       ret = kiocb_set_rw_flags(kiocb, sqe->rw_flags);
>> +       if (unlikely(ret))
>> +               goto out_fput;
>> +       if (force_nonblock) {
>> +               kiocb->ki_flags |= IOCB_NOWAIT;
>> +               req->flags |= REQ_F_FORCE_NONBLOCK;
>> +       }
>> +       if (kiocb->ki_flags & IOCB_HIPRI) {
>> +               ret = -EINVAL;
>> +               goto out_fput;
>> +       }
>> +
>> +       kiocb->ki_complete = io_complete_rw;
>> +       return 0;
>> +out_fput:
>> +       fput(kiocb->ki_filp);
>> +       return ret;
>> +}
> [...]
>> +static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>> +                      bool force_nonblock)
>> +{
>> +       struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
>> +       struct kiocb *kiocb = &req->rw;
>> +       struct iov_iter iter;
>> +       struct file *file;
>> +       ssize_t ret;
>> +
>> +       ret = io_prep_rw(req, sqe, force_nonblock);
>> +       if (ret)
>> +               return ret;
>> +       file = kiocb->ki_filp;
>> +
>> +       ret = -EBADF;
>> +       if (unlikely(!(file->f_mode & FMODE_READ)))
>> +               goto out_fput;
>> +       ret = -EINVAL;
>> +       if (unlikely(!file->f_op->read_iter))
>> +               goto out_fput;
>> +
>> +       ret = io_import_iovec(req->ctx, READ, sqe, &iovec, &iter);
>> +       if (ret)
>> +               goto out_fput;
>> +
>> +       ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_iter_count(&iter));
>> +       if (!ret) {
>> +               ssize_t ret2;
>> +
>> +               /* Catch -EAGAIN return for forced non-blocking submission */
>> +               ret2 = call_read_iter(file, kiocb, &iter);
>> +               if (!force_nonblock || ret2 != -EAGAIN)
>> +                       io_rw_done(kiocb, ret2);
>> +               else
>> +                       ret = -EAGAIN;
>> +       }
>> +       kfree(iovec);
>> +out_fput:
>> +       if (unlikely(ret))
>> +               fput(file);
>> +       return ret;
>> +}
> [...]
>> +static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>> +                          struct sqe_submit *s, bool force_nonblock)
>> +{
>> +       const struct io_uring_sqe *sqe = s->sqe;
>> +       ssize_t ret;
>> +
>> +       if (unlikely(s->index >= ctx->sq_entries))
>> +               return -EINVAL;
>> +       req->user_data = sqe->user_data;
>> +
>> +       ret = -EINVAL;
>> +       switch (sqe->opcode) {
>> +       case IORING_OP_NOP:
>> +               ret = io_nop(req, sqe);
>> +               break;
>> +       case IORING_OP_READV:
>> +               ret = io_read(req, sqe, force_nonblock);
>> +               break;
>> +       case IORING_OP_WRITEV:
>> +               ret = io_write(req, sqe, force_nonblock);
>> +               break;
>> +       default:
>> +               ret = -EINVAL;
>> +               break;
>> +       }
>> +
>> +       return ret;
>> +}
>> +
>> +static void io_sq_wq_submit_work(struct work_struct *work)
>> +{
>> +       struct io_kiocb *req = container_of(work, struct io_kiocb, work);
>> +       struct sqe_submit *s = &req->submit;
>> +       u64 user_data = s->sqe->user_data;
>> +       struct io_ring_ctx *ctx = req->ctx;
>> +       mm_segment_t old_fs = get_fs();
>> +       struct files_struct *old_files;
>> +       int ret;
>> +
>> +        /* Ensure we clear previously set forced non-block flag */
>> +       req->flags &= ~REQ_F_FORCE_NONBLOCK;
>> +
>> +       old_files = current->files;
>> +       current->files = ctx->sqo_files;
> 
> I think you're not supposed to twiddle with current->files without
> holding task_lock(current).

'current' is the work queue item in this case, do we need to protect
against anything else? I can add the locking around the assignments
(both places).

>> +       if (!mmget_not_zero(ctx->sqo_mm)) {
>> +               ret = -EFAULT;
>> +               goto err;
>> +       }
>> +
>> +       use_mm(ctx->sqo_mm);
>> +       set_fs(USER_DS);
>> +
>> +       ret = __io_submit_sqe(ctx, req, s, false);
>> +
>> +       set_fs(old_fs);
>> +       unuse_mm(ctx->sqo_mm);
>> +       mmput(ctx->sqo_mm);
>> +err:
>> +       if (ret) {
>> +               io_cqring_add_event(ctx, user_data, ret, 0);
>> +               io_free_req(req);
>> +       }
>> +       current->files = old_files;
>> +}
> [...]
>> +static int io_sq_offload_start(struct io_ring_ctx *ctx)
>> +{
>> +       int ret;
>> +
>> +       ctx->sqo_mm = current->mm;
> 
> What keeps this thing alive?

I think we're dealing with the same thing as the files below, I'll
defer to that.

>> +       /*
>> +        * This is safe since 'current' has the fd installed, and if that gets
>> +        * closed on exit, then fops->release() is invoked which waits for the
>> +        * async contexts to flush and exit before exiting.
>> +        */
>> +       ret = -EBADF;
>> +       ctx->sqo_files = current->files;
>> +       if (!ctx->sqo_files)
>> +               goto err;
> 
> That's gnarly. Adding Al Viro to the thread.
> 
> I think you misunderstand the semantics of f_op->release. The ->flush
> handler is invoked whenever a file descriptor is closed through
> filp_close() (via deletion of the files_struct, sys_close(),
> sys_dup2(), ...), so if you had used that one, _maybe_ this would
> work. But the ->release handler only runs when the _last_ reference to
> a struct file has been dropped - so you can, for example, fork() a
> child, then exit() in the parent, and the ->release handler isn't
> invoked. So I don't see how this can work.

The anonfd is CLOEXEC. The idea is exactly that it only runs when the
last reference to the file has been dropped. Not sure why you think I
need ->flush() here?

> But even if you had abused ->flush for this instead: close_files()
> currently has a comment in it that claims that "this is the last
> reference to the files structure"; this change would make that claim
> untrue.

Let me see if I can explain my intent better than that comment... We
know the parent who set up the io_uring instance will be around for as
long as io_uring instance persists. When we are tearing down the
io_uring, then we wait for any async contexts (like the one above) to
exit. Once they are exited, it's safe to proceed with the exit and
teardown ->files[].

That's the idea... Not trying to be clever, some of this dates back to
the aio weirdness where it was impossible to have cross references like
this, since it would lead to teardown deadlocks with how exit_aio() is
used. I can probably grab a struct files reference above, but currently
I don't see why it's needed.

>> +       /* Do QD, or 2 * CPUS, whatever is smallest */
>> +       ctx->sqo_wq = alloc_workqueue("io_ring-wq", WQ_UNBOUND | WQ_FREEZABLE,
>> +                       min(ctx->sq_entries - 1, 2 * num_online_cpus()));
>> +       if (!ctx->sqo_wq) {
>> +               ret = -ENOMEM;
>> +               goto err;
>> +       }
>> +
>> +       return 0;
>> +err:
>> +       if (ctx->sqo_files)
>> +               ctx->sqo_files = NULL;
>> +       ctx->sqo_mm = NULL;
>> +       return ret;
>> +}
> [...]
>> +static const struct file_operations io_uring_fops = {
>> +       .release        = io_uring_release,
>> +       .mmap           = io_uring_mmap,
>> +       .poll           = io_uring_poll,
>> +       .fasync         = io_uring_fasync,
>> +};
> [...]
> 


-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 12/18] io_uring: add support for pre-mapped user IO buffers
  2019-01-28 23:35   ` Jann Horn
@ 2019-01-28 23:50     ` Jens Axboe
  2019-01-29  0:36       ` Jann Horn
  0 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-28 23:50 UTC (permalink / raw)
  To: Jann Horn
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/28/19 4:35 PM, Jann Horn wrote:
> On Mon, Jan 28, 2019 at 10:36 PM Jens Axboe <axboe@kernel.dk> wrote:
>> If we have fixed user buffers, we can map them into the kernel when we
>> setup the io_context. That avoids the need to do get_user_pages() for
>> each and every IO.
> [...]
>> +static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>> +                              void __user *arg, unsigned nr_args)
>> +{
>> +       int ret;
>> +
>> +       /* Drop our initial ref and wait for the ctx to be fully idle */
>> +       percpu_ref_put(&ctx->refs);
> 
> The line above drops a reference that you just got in the caller...

Right

>> +       percpu_ref_kill(&ctx->refs);
>> +       wait_for_completion(&ctx->ctx_done);
>> +
>> +       switch (opcode) {
>> +       case IORING_REGISTER_BUFFERS:
>> +               ret = io_sqe_buffer_register(ctx, arg, nr_args);
>> +               break;
>> +       case IORING_UNREGISTER_BUFFERS:
>> +               ret = -EINVAL;
>> +               if (arg || nr_args)
>> +                       break;
>> +               ret = io_sqe_buffer_unregister(ctx);
>> +               break;
>> +       default:
>> +               ret = -EINVAL;
>> +               break;
>> +       }
>> +
>> +       /* bring the ctx back to life */
>> +       reinit_completion(&ctx->ctx_done);
>> +       percpu_ref_resurrect(&ctx->refs);
>> +       percpu_ref_get(&ctx->refs);
> 
> And then this line takes a reference that the caller will immediately
> drop again? Why?

Just want to keep it symmetric and avoid having weird "this function drops
a reference" use cases.

> 
>> +       return ret;
>> +}
>> +
>> +SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
>> +               void __user *, arg, unsigned int, nr_args)
>> +{
>> +       struct io_ring_ctx *ctx;
>> +       long ret = -EBADF;
>> +       struct fd f;
>> +
>> +       f = fdget(fd);
>> +       if (!f.file)
>> +               return -EBADF;
>> +
>> +       ret = -EOPNOTSUPP;
>> +       if (f.file->f_op != &io_uring_fops)
>> +               goto out_fput;
>> +
>> +       ret = -ENXIO;
>> +       ctx = f.file->private_data;
>> +       if (!percpu_ref_tryget(&ctx->refs))
>> +               goto out_fput;
> 
> If you are holding the uring_lock of a ctx that can be accessed
> through a file descriptor (which you do just after this point), you
> know that the percpu_ref isn't zero, right? Why are you doing the
> tryget here?

Not sure I follow... We don't hold the lock at this point. I guess your
point is that since the descriptor is open (or we'd fail the above
check), then there's no point doing the tryget variant here? That's
strictly true, that could just be a get().

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 23:46     ` Jens Axboe
@ 2019-01-28 23:59       ` Jann Horn
  2019-01-29  0:03         ` Jens Axboe
  2019-02-01 16:57         ` Matt Mullins
  0 siblings, 2 replies; 62+ messages in thread
From: Jann Horn @ 2019-01-28 23:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Al Viro, linux-aio, linux-block, linux-man, Linux API, hch,
	jmoyer, Avi Kivity

On Tue, Jan 29, 2019 at 12:47 AM Jens Axboe <axboe@kernel.dk> wrote:
> On 1/28/19 3:32 PM, Jann Horn wrote:
> > On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
> >> The submission queue (SQ) and completion queue (CQ) rings are shared
> >> between the application and the kernel. This eliminates the need to
> >> copy data back and forth to submit and complete IO.
> >>
> >> IO submissions use the io_uring_sqe data structure, and completions
> >> are generated in the form of io_uring_cqe data structures. The SQ
> >> ring is an index into the io_uring_sqe array, which makes it possible
> >> to submit a batch of IOs without them being contiguous in the ring.
> >> The CQ ring is always contiguous, as completion events are inherently
> >> unordered, and hence any io_uring_cqe entry can point back to an
> >> arbitrary submission.
> >>
> >> Two new system calls are added for this:
> >>
> >> io_uring_setup(entries, params)
> >>         Sets up a context for doing async IO. On success, returns a file
> >>         descriptor that the application can mmap to gain access to the
> >>         SQ ring, CQ ring, and io_uring_sqes.
> >>
> >> io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
> >>         Initiates IO against the rings mapped to this fd, or waits for
> >>         them to complete, or both. The behavior is controlled by the
> >>         parameters passed in. If 'to_submit' is non-zero, then we'll
> >>         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
> >>         kernel will wait for 'min_complete' events, if they aren't
> >>         already available. It's valid to set IORING_ENTER_GETEVENTS
> >>         and 'min_complete' == 0 at the same time, this allows the
> >>         kernel to return already completed events without waiting
> >>         for them. This is useful only for polling, as for IRQ
> >>         driven IO, the application can just check the CQ ring
> >>         without entering the kernel.
> >>
> >> With this setup, it's possible to do async IO with a single system
> >> call. Future developments will enable polled IO with this interface,
> >> and polled submission as well. The latter will enable an application
> >> to do IO without doing ANY system calls at all.
> >>
> >> For IRQ driven IO, an application only needs to enter the kernel for
> >> completions if it wants to wait for them to occur.
> >>
> >> Each io_uring is backed by a workqueue, to support buffered async IO
> >> as well. We will only punt to an async context if the command would
> >> need to wait for IO on the device side. Any data that can be accessed
> >> directly in the page cache is done inline. This avoids the slowness
> >> issue of usual threadpools, since cached data is accessed as quickly
> >> as a sync interface.
> >>
> >> Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
> > [...]
> >> +static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> >> +                     bool force_nonblock)
> >> +{
> >> +       struct kiocb *kiocb = &req->rw;
> >> +       int ret;
> >> +
> >> +       kiocb->ki_filp = fget(sqe->fd);
> >> +       if (unlikely(!kiocb->ki_filp))
> >> +               return -EBADF;
> >> +       kiocb->ki_pos = sqe->off;
> >> +       kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
> >> +       kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
> >> +       if (sqe->ioprio) {
> >> +               ret = ioprio_check_cap(sqe->ioprio);
> >> +               if (ret)
> >> +                       goto out_fput;
> >> +
> >> +               kiocb->ki_ioprio = sqe->ioprio;
> >> +       } else
> >> +               kiocb->ki_ioprio = get_current_ioprio();
> >> +
> >> +       ret = kiocb_set_rw_flags(kiocb, sqe->rw_flags);
> >> +       if (unlikely(ret))
> >> +               goto out_fput;
> >> +       if (force_nonblock) {
> >> +               kiocb->ki_flags |= IOCB_NOWAIT;
> >> +               req->flags |= REQ_F_FORCE_NONBLOCK;
> >> +       }
> >> +       if (kiocb->ki_flags & IOCB_HIPRI) {
> >> +               ret = -EINVAL;
> >> +               goto out_fput;
> >> +       }
> >> +
> >> +       kiocb->ki_complete = io_complete_rw;
> >> +       return 0;
> >> +out_fput:
> >> +       fput(kiocb->ki_filp);
> >> +       return ret;
> >> +}
> > [...]
> >> +static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> >> +                      bool force_nonblock)
> >> +{
> >> +       struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
> >> +       struct kiocb *kiocb = &req->rw;
> >> +       struct iov_iter iter;
> >> +       struct file *file;
> >> +       ssize_t ret;
> >> +
> >> +       ret = io_prep_rw(req, sqe, force_nonblock);
> >> +       if (ret)
> >> +               return ret;
> >> +       file = kiocb->ki_filp;
> >> +
> >> +       ret = -EBADF;
> >> +       if (unlikely(!(file->f_mode & FMODE_READ)))
> >> +               goto out_fput;
> >> +       ret = -EINVAL;
> >> +       if (unlikely(!file->f_op->read_iter))
> >> +               goto out_fput;
> >> +
> >> +       ret = io_import_iovec(req->ctx, READ, sqe, &iovec, &iter);
> >> +       if (ret)
> >> +               goto out_fput;
> >> +
> >> +       ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_iter_count(&iter));
> >> +       if (!ret) {
> >> +               ssize_t ret2;
> >> +
> >> +               /* Catch -EAGAIN return for forced non-blocking submission */
> >> +               ret2 = call_read_iter(file, kiocb, &iter);
> >> +               if (!force_nonblock || ret2 != -EAGAIN)
> >> +                       io_rw_done(kiocb, ret2);
> >> +               else
> >> +                       ret = -EAGAIN;
> >> +       }
> >> +       kfree(iovec);
> >> +out_fput:
> >> +       if (unlikely(ret))
> >> +               fput(file);
> >> +       return ret;
> >> +}
> > [...]
> >> +static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> >> +                          struct sqe_submit *s, bool force_nonblock)
> >> +{
> >> +       const struct io_uring_sqe *sqe = s->sqe;
> >> +       ssize_t ret;
> >> +
> >> +       if (unlikely(s->index >= ctx->sq_entries))
> >> +               return -EINVAL;
> >> +       req->user_data = sqe->user_data;
> >> +
> >> +       ret = -EINVAL;
> >> +       switch (sqe->opcode) {
> >> +       case IORING_OP_NOP:
> >> +               ret = io_nop(req, sqe);
> >> +               break;
> >> +       case IORING_OP_READV:
> >> +               ret = io_read(req, sqe, force_nonblock);
> >> +               break;
> >> +       case IORING_OP_WRITEV:
> >> +               ret = io_write(req, sqe, force_nonblock);
> >> +               break;
> >> +       default:
> >> +               ret = -EINVAL;
> >> +               break;
> >> +       }
> >> +
> >> +       return ret;
> >> +}
> >> +
> >> +static void io_sq_wq_submit_work(struct work_struct *work)
> >> +{
> >> +       struct io_kiocb *req = container_of(work, struct io_kiocb, work);
> >> +       struct sqe_submit *s = &req->submit;
> >> +       u64 user_data = s->sqe->user_data;
> >> +       struct io_ring_ctx *ctx = req->ctx;
> >> +       mm_segment_t old_fs = get_fs();
> >> +       struct files_struct *old_files;
> >> +       int ret;
> >> +
> >> +        /* Ensure we clear previously set forced non-block flag */
> >> +       req->flags &= ~REQ_F_FORCE_NONBLOCK;
> >> +
> >> +       old_files = current->files;
> >> +       current->files = ctx->sqo_files;
> >
> > I think you're not supposed to twiddle with current->files without
> > holding task_lock(current).
>
> 'current' is the work queue item in this case, do we need to protect
> against anything else? I can add the locking around the assignments
> (both places).

Stuff like proc_fd_link() uses get_files_struct(), which grabs a
reference to your current files_struct protected only by task_lock();
and it doesn't use anything like READ_ONCE(), so even if the object
lifetime is not a problem, get_files_struct() could potentially crash
due to a double-read (reading task->files twice and assuming that the
result will be the same). As far as I can tell, this procfs code also
works on kernel threads.
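
Roughly the shape of the problem (not the actual fs/file.c code, just an
illustration of the double-read):

	/*
	 * If the writer updates task->files without task_lock(), a reader
	 * that dereferences task->files twice can observe two different
	 * pointers, since nothing forces a single load.
	 */
	if (task->files)				/* read #1 */
		atomic_inc(&task->files->count);	/* read #2, may differ */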

> >> +       if (!mmget_not_zero(ctx->sqo_mm)) {
> >> +               ret = -EFAULT;
> >> +               goto err;
> >> +       }
> >> +
> >> +       use_mm(ctx->sqo_mm);
> >> +       set_fs(USER_DS);
> >> +
> >> +       ret = __io_submit_sqe(ctx, req, s, false);
> >> +
> >> +       set_fs(old_fs);
> >> +       unuse_mm(ctx->sqo_mm);
> >> +       mmput(ctx->sqo_mm);
> >> +err:
> >> +       if (ret) {
> >> +               io_cqring_add_event(ctx, user_data, ret, 0);
> >> +               io_free_req(req);
> >> +       }
> >> +       current->files = old_files;
> >> +}
> > [...]
> >> +static int io_sq_offload_start(struct io_ring_ctx *ctx)
> >> +{
> >> +       int ret;
> >> +
> >> +       ctx->sqo_mm = current->mm;
> >
> > What keeps this thing alive?
>
> I think we're dealing with the same thing as the files below, I'll
> defer to that.
>
> >> +       /*
> >> +        * This is safe since 'current' has the fd installed, and if that gets
> >> +        * closed on exit, then fops->release() is invoked which waits for the
> >> +        * async contexts to flush and exit before exiting.
> >> +        */
> >> +       ret = -EBADF;
> >> +       ctx->sqo_files = current->files;
> >> +       if (!ctx->sqo_files)
> >> +               goto err;
> >
> > That's gnarly. Adding Al Viro to the thread.
> >
> > I think you misunderstand the semantics of f_op->release. The ->flush
> > handler is invoked whenever a file descriptor is closed through
> > filp_close() (via deletion of the files_struct, sys_close(),
> > sys_dup2(), ...), so if you had used that one, _maybe_ this would
> > work. But the ->release handler only runs when the _last_ reference to
> > a struct file has been dropped - so you can, for example, fork() a
> > child, then exit() in the parent, and the ->release handler isn't
> > invoked. So I don't see how this can work.
>
> The anonfd is CLOEXEC. The idea is exactly that it only runs when the
> last reference to the file has been dropped. Not sure why you think I
> need ->flush() here?

Can't I just use fcntl(fd, F_SETFD, 0) to clear the CLOEXEC flag?
Or send the fd via SCM_RIGHTS?
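
Either of those is only a few lines of userspace; a sketch, with error
handling omitted and ring_fd/unix_sock assumed to exist:

	#include <fcntl.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <sys/uio.h>

	static void leak_ring_fd(int ring_fd, int unix_sock)
	{
		char dummy = 'x', cbuf[CMSG_SPACE(sizeof(int))];
		struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
		struct msghdr msg = {
			.msg_iov = &iov, .msg_iovlen = 1,
			.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
		};
		struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

		/* clear close-on-exec on the ring fd... */
		fcntl(ring_fd, F_SETFD, 0);

		/* ...and/or pass the fd to another process via SCM_RIGHTS */
		cmsg->cmsg_level = SOL_SOCKET;
		cmsg->cmsg_type = SCM_RIGHTS;
		cmsg->cmsg_len = CMSG_LEN(sizeof(int));
		memcpy(CMSG_DATA(cmsg), &ring_fd, sizeof(int));
		sendmsg(unix_sock, &msg, 0);
	}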

> > But even if you had abused ->flush for this instead: close_files()
> > currently has a comment in it that claims that "this is the last
> > reference to the files structure"; this change would make that claim
> > untrue.
>
> Let me see if I can explain my intent better than that comment... We
> know the parent who set up the io_uring instance will be around for as
> long as io_uring instance persists.

That's the part that I think is wrong: As far as I can tell, the
parent can go away and you won't notice.

Also, note that "the parent" is different things for ->files and ->mm.
You can have a multithreaded process whose threads don't have the same
->files, or multiple processes that share ->files without sharing ->mm,
...

> When we are tearing down the
> io_uring, then we wait for any async contexts (like the one above) to
> exit. Once they are exited, it's safe to proceed with the exit and
> teardown ->files[].

But you only do that teardown on ->release, right? And ->release
doesn't have much to do with the process lifetime.

> That's the idea... Not trying to be clever, some of this dates back to
> the aio weirdness where it was impossible to have cross references like
> this, since it would lead to teardown deadlocks with how exit_aio() is
> used. I can probably grab a struct files reference above, but currently
> I don't see why it's needed.


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 23:59       ` Jann Horn
@ 2019-01-29  0:03         ` Jens Axboe
  2019-01-29  0:31           ` Jens Axboe
  2019-02-01 16:57         ` Matt Mullins
  1 sibling, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  0:03 UTC (permalink / raw)
  To: Jann Horn
  Cc: Al Viro, linux-aio, linux-block, linux-man, Linux API, hch,
	jmoyer, Avi Kivity

On 1/28/19 4:59 PM, Jann Horn wrote:
> On Tue, Jan 29, 2019 at 12:47 AM Jens Axboe <axboe@kernel.dk> wrote:
>> On 1/28/19 3:32 PM, Jann Horn wrote:
>>> On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
>>>> The submission queue (SQ) and completion queue (CQ) rings are shared
>>>> between the application and the kernel. This eliminates the need to
>>>> copy data back and forth to submit and complete IO.
>>>>
>>>> IO submissions use the io_uring_sqe data structure, and completions
>>>> are generated in the form of io_uring_cqe data structures. The SQ
>>>> ring is an index into the io_uring_sqe array, which makes it possible
>>>> to submit a batch of IOs without them being contiguous in the ring.
>>>> The CQ ring is always contiguous, as completion events are inherently
>>>> unordered, and hence any io_uring_cqe entry can point back to an
>>>> arbitrary submission.
>>>>
>>>> Two new system calls are added for this:
>>>>
>>>> io_uring_setup(entries, params)
>>>>         Sets up a context for doing async IO. On success, returns a file
>>>>         descriptor that the application can mmap to gain access to the
>>>>         SQ ring, CQ ring, and io_uring_sqes.
>>>>
>>>> io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
>>>>         Initiates IO against the rings mapped to this fd, or waits for
>>>>         them to complete, or both. The behavior is controlled by the
>>>>         parameters passed in. If 'to_submit' is non-zero, then we'll
>>>>         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
>>>>         kernel will wait for 'min_complete' events, if they aren't
>>>>         already available. It's valid to set IORING_ENTER_GETEVENTS
>>>>         and 'min_complete' == 0 at the same time, this allows the
>>>>         kernel to return already completed events without waiting
>>>>         for them. This is useful only for polling, as for IRQ
>>>>         driven IO, the application can just check the CQ ring
>>>>         without entering the kernel.
>>>>
>>>> With this setup, it's possible to do async IO with a single system
>>>> call. Future developments will enable polled IO with this interface,
>>>> and polled submission as well. The latter will enable an application
>>>> to do IO without doing ANY system calls at all.
>>>>
>>>> For IRQ driven IO, an application only needs to enter the kernel for
>>>> completions if it wants to wait for them to occur.
>>>>
>>>> Each io_uring is backed by a workqueue, to support buffered async IO
>>>> as well. We will only punt to an async context if the command would
>>>> need to wait for IO on the device side. Any data that can be accessed
>>>> directly in the page cache is done inline. This avoids the slowness
>>>> issue of usual threadpools, since cached data is accessed as quickly
>>>> as a sync interface.
>>>>
>>>> Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
>>> [...]
>>>> +static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>>>> +                     bool force_nonblock)
>>>> +{
>>>> +       struct kiocb *kiocb = &req->rw;
>>>> +       int ret;
>>>> +
>>>> +       kiocb->ki_filp = fget(sqe->fd);
>>>> +       if (unlikely(!kiocb->ki_filp))
>>>> +               return -EBADF;
>>>> +       kiocb->ki_pos = sqe->off;
>>>> +       kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
>>>> +       kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
>>>> +       if (sqe->ioprio) {
>>>> +               ret = ioprio_check_cap(sqe->ioprio);
>>>> +               if (ret)
>>>> +                       goto out_fput;
>>>> +
>>>> +               kiocb->ki_ioprio = sqe->ioprio;
>>>> +       } else
>>>> +               kiocb->ki_ioprio = get_current_ioprio();
>>>> +
>>>> +       ret = kiocb_set_rw_flags(kiocb, sqe->rw_flags);
>>>> +       if (unlikely(ret))
>>>> +               goto out_fput;
>>>> +       if (force_nonblock) {
>>>> +               kiocb->ki_flags |= IOCB_NOWAIT;
>>>> +               req->flags |= REQ_F_FORCE_NONBLOCK;
>>>> +       }
>>>> +       if (kiocb->ki_flags & IOCB_HIPRI) {
>>>> +               ret = -EINVAL;
>>>> +               goto out_fput;
>>>> +       }
>>>> +
>>>> +       kiocb->ki_complete = io_complete_rw;
>>>> +       return 0;
>>>> +out_fput:
>>>> +       fput(kiocb->ki_filp);
>>>> +       return ret;
>>>> +}
>>> [...]
>>>> +static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>>>> +                      bool force_nonblock)
>>>> +{
>>>> +       struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
>>>> +       struct kiocb *kiocb = &req->rw;
>>>> +       struct iov_iter iter;
>>>> +       struct file *file;
>>>> +       ssize_t ret;
>>>> +
>>>> +       ret = io_prep_rw(req, sqe, force_nonblock);
>>>> +       if (ret)
>>>> +               return ret;
>>>> +       file = kiocb->ki_filp;
>>>> +
>>>> +       ret = -EBADF;
>>>> +       if (unlikely(!(file->f_mode & FMODE_READ)))
>>>> +               goto out_fput;
>>>> +       ret = -EINVAL;
>>>> +       if (unlikely(!file->f_op->read_iter))
>>>> +               goto out_fput;
>>>> +
>>>> +       ret = io_import_iovec(req->ctx, READ, sqe, &iovec, &iter);
>>>> +       if (ret)
>>>> +               goto out_fput;
>>>> +
>>>> +       ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_iter_count(&iter));
>>>> +       if (!ret) {
>>>> +               ssize_t ret2;
>>>> +
>>>> +               /* Catch -EAGAIN return for forced non-blocking submission */
>>>> +               ret2 = call_read_iter(file, kiocb, &iter);
>>>> +               if (!force_nonblock || ret2 != -EAGAIN)
>>>> +                       io_rw_done(kiocb, ret2);
>>>> +               else
>>>> +                       ret = -EAGAIN;
>>>> +       }
>>>> +       kfree(iovec);
>>>> +out_fput:
>>>> +       if (unlikely(ret))
>>>> +               fput(file);
>>>> +       return ret;
>>>> +}
>>> [...]
>>>> +static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>>> +                          struct sqe_submit *s, bool force_nonblock)
>>>> +{
>>>> +       const struct io_uring_sqe *sqe = s->sqe;
>>>> +       ssize_t ret;
>>>> +
>>>> +       if (unlikely(s->index >= ctx->sq_entries))
>>>> +               return -EINVAL;
>>>> +       req->user_data = sqe->user_data;
>>>> +
>>>> +       ret = -EINVAL;
>>>> +       switch (sqe->opcode) {
>>>> +       case IORING_OP_NOP:
>>>> +               ret = io_nop(req, sqe);
>>>> +               break;
>>>> +       case IORING_OP_READV:
>>>> +               ret = io_read(req, sqe, force_nonblock);
>>>> +               break;
>>>> +       case IORING_OP_WRITEV:
>>>> +               ret = io_write(req, sqe, force_nonblock);
>>>> +               break;
>>>> +       default:
>>>> +               ret = -EINVAL;
>>>> +               break;
>>>> +       }
>>>> +
>>>> +       return ret;
>>>> +}
>>>> +
>>>> +static void io_sq_wq_submit_work(struct work_struct *work)
>>>> +{
>>>> +       struct io_kiocb *req = container_of(work, struct io_kiocb, work);
>>>> +       struct sqe_submit *s = &req->submit;
>>>> +       u64 user_data = s->sqe->user_data;
>>>> +       struct io_ring_ctx *ctx = req->ctx;
>>>> +       mm_segment_t old_fs = get_fs();
>>>> +       struct files_struct *old_files;
>>>> +       int ret;
>>>> +
>>>> +        /* Ensure we clear previously set forced non-block flag */
>>>> +       req->flags &= ~REQ_F_FORCE_NONBLOCK;
>>>> +
>>>> +       old_files = current->files;
>>>> +       current->files = ctx->sqo_files;
>>>
>>> I think you're not supposed to twiddle with current->files without
>>> holding task_lock(current).
>>
>> 'current' is the work queue item in this case, do we need to protect
>> against anything else? I can add the locking around the assignments
>> (both places).
> 
> Stuff like proc_fd_link() uses get_files_struct(), which grabs a
> reference to your current files_struct protected only by task_lock();
> and it doesn't use anything like READ_ONCE(), so even if the object
> lifetime is not a problem, get_files_struct() could potentially crash
> due to a double-read (reading task->files twice and assuming that the
> result will be the same). As far as I can tell, this procfs code also
> works on kernel threads.

OK, that does make sense. I've added the locking.
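i.e. something along these lines around both assignments (sketch):

	/* when entering the async context */
	task_lock(current);
	current->files = ctx->sqo_files;
	task_unlock(current);

	/* and when restoring on the way out */
	task_lock(current);
	current->files = old_files;
	task_unlock(current);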

>>>> +       if (!mmget_not_zero(ctx->sqo_mm)) {
>>>> +               ret = -EFAULT;
>>>> +               goto err;
>>>> +       }
>>>> +
>>>> +       use_mm(ctx->sqo_mm);
>>>> +       set_fs(USER_DS);
>>>> +
>>>> +       ret = __io_submit_sqe(ctx, req, s, false);
>>>> +
>>>> +       set_fs(old_fs);
>>>> +       unuse_mm(ctx->sqo_mm);
>>>> +       mmput(ctx->sqo_mm);
>>>> +err:
>>>> +       if (ret) {
>>>> +               io_cqring_add_event(ctx, user_data, ret, 0);
>>>> +               io_free_req(req);
>>>> +       }
>>>> +       current->files = old_files;
>>>> +}
>>> [...]
>>>> +static int io_sq_offload_start(struct io_ring_ctx *ctx)
>>>> +{
>>>> +       int ret;
>>>> +
>>>> +       ctx->sqo_mm = current->mm;
>>>
>>> What keeps this thing alive?
>>
>> I think we're dealing with the same thing as the files below, I'll
>> defer to that.
>>
>>>> +       /*
>>>> +        * This is safe since 'current' has the fd installed, and if that gets
>>>> +        * closed on exit, then fops->release() is invoked which waits for the
>>>> +        * async contexts to flush and exit before exiting.
>>>> +        */
>>>> +       ret = -EBADF;
>>>> +       ctx->sqo_files = current->files;
>>>> +       if (!ctx->sqo_files)
>>>> +               goto err;
>>>
>>> That's gnarly. Adding Al Viro to the thread.
>>>
>>> I think you misunderstand the semantics of f_op->release. The ->flush
>>> handler is invoked whenever a file descriptor is closed through
>>> filp_close() (via deletion of the files_struct, sys_close(),
>>> sys_dup2(), ...), so if you had used that one, _maybe_ this would
>>> work. But the ->release handler only runs when the _last_ reference to
>>> a struct file has been dropped - so you can, for example, fork() a
>>> child, then exit() in the parent, and the ->release handler isn't
>>> invoked. So I don't see how this can work.
>>
>> The anonfd is CLOEXEC. The idea is exactly that it only runs when the
>> last reference to the file has been dropped. Not sure why you think I
>> need ->flush() here?
> 
> Can't I just use fcntl(fd, F_SETFD, 0) to clear the CLOEXEC flag?
> Or send the fd via SCM_RIGHTS?

That would obviously be a problem...

>>> But even if you had abused ->flush for this instead: close_files()
>>> currently has a comment in it that claims that "this is the last
>>> reference to the files structure"; this change would make that claim
>>> untrue.
>>
>> Let me see if I can explain my intent better than that comment... We
>> know the parent who set up the io_uring instance will be around for as
>> long as io_uring instance persists.
> 
> That's the part that I think is wrong: As far as I can tell, the
> parent can go away and you won't notice.

If that's the case, then the mm/files referencing needs to be looked
over for sure. It's currently relying on the fact that the parent stays
alive. If it can go away without ->release() being called, then we have
issues.

> Also, note that "the parent" is different things for ->files and ->mm.
> You can have a multithreaded process whose threads don't have the same
> ->files, or multiple processes that share ->files without sharing ->mm,
> ...

Of course, I do realize that.

>> When we are tearing down the
>> io_uring, then we wait for any async contexts (like the one above) to
>> exit. Once they are exited, it's safe to proceed with the exit and
>> teardown ->files[].
> 
> But you only do that teardown on ->release, right? And ->release
> doesn't have much to do with the process lifetime.

Yes, only on ->release().

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  0:03         ` Jens Axboe
@ 2019-01-29  0:31           ` Jens Axboe
  2019-01-29  0:34             ` Jann Horn
  0 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  0:31 UTC (permalink / raw)
  To: Jann Horn
  Cc: Al Viro, linux-aio, linux-block, linux-man, Linux API, hch,
	jmoyer, Avi Kivity

On 1/28/19 5:03 PM, Jens Axboe wrote:
>> But you only do that teardown on ->release, right? And ->release
>> doesn't have much to do with the process lifetime.
> 
> Yes, only on ->release().

OK, so I reworked the files struct to just grab it, then we ensure that
doesn't go away. For mm, it's a bit more tricky. I think the best
solution here is to add a fops->flush() and check for the process
exiting its files. If it does, we quiesce the async contexts and prevent
further use of that mm. We can't just keep holding a reference to the mm
like we do with the files.

That should solve both cases.
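
Roughly this direction, as a sketch (not the final code): take a real
reference with get_files_struct() at setup time, and add a ->flush hook
along these lines:

	static int io_uring_flush(struct file *file, void *data)
	{
		struct io_ring_ctx *ctx = file->private_data;

		/* 'data' is the files_struct being torn down; only react
		 * when it's the one that owns the rings */
		if (data == ctx->sqo_files) {
			/* quiesce async contexts, then stop using ctx->sqo_mm */
		}
		return 0;
	}

	static const struct file_operations io_uring_fops = {
		.release	= io_uring_release,
		.flush		= io_uring_flush,
		/* mmap/poll/fasync as in the existing patch */
	};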

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  0:31           ` Jens Axboe
@ 2019-01-29  0:34             ` Jann Horn
  2019-01-29  0:55               ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-01-29  0:34 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Al Viro, linux-aio, linux-block, linux-man, Linux API, hch,
	jmoyer, Avi Kivity

On Tue, Jan 29, 2019 at 1:32 AM Jens Axboe <axboe@kernel.dk> wrote:
> On 1/28/19 5:03 PM, Jens Axboe wrote:
> >> But you only do that teardown on ->release, right? And ->release
> >> doesn't have much to do with the process lifetime.
> >
> > Yes, only on ->release().
>
> OK, so I reworked the files struct to just grab it, then we ensure that
> doesn't go away. For mm, it's a bit more tricky. I think the best
> solution here is to add a fops->flush() and check for the process
> exiting its files. If it does, we quiesce the async contexts and prevent
> further use of that mm. We can't just keep holding a reference to the mm
> like we do with the files.
>
> That should solve both cases.

You still have to hold a reference on the mm though, I think (for
example, because two tasks might be sharing the fd table without
sharing the mm).
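
i.e. something like (sketch):

	/* pin the mm_struct itself so the pointer stays valid */
	ctx->sqo_mm = current->mm;
	mmgrab(ctx->sqo_mm);	/* paired with mmdrop() at teardown */

	/* and keep the existing mmget_not_zero()/mmput() pair around the
	 * actual submission, so a dead address space is never used */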


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 12/18] io_uring: add support for pre-mapped user IO buffers
  2019-01-28 23:50     ` Jens Axboe
@ 2019-01-29  0:36       ` Jann Horn
  2019-01-29  1:25         ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-01-29  0:36 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On Tue, Jan 29, 2019 at 12:50 AM Jens Axboe <axboe@kernel.dk> wrote:
> On 1/28/19 4:35 PM, Jann Horn wrote:
> > On Mon, Jan 28, 2019 at 10:36 PM Jens Axboe <axboe@kernel.dk> wrote:
> >> If we have fixed user buffers, we can map them into the kernel when we
> >> setup the io_context. That avoids the need to do get_user_pages() for
> >> each and every IO.
> > [...]
> >> +static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
> >> +                              void __user *arg, unsigned nr_args)
> >> +{
> >> +       int ret;
> >> +
> >> +       /* Drop our initial ref and wait for the ctx to be fully idle */
> >> +       percpu_ref_put(&ctx->refs);
> >
> > The line above drops a reference that you just got in the caller...
>
> Right
>
> >> +       percpu_ref_kill(&ctx->refs);
> >> +       wait_for_completion(&ctx->ctx_done);
> >> +
> >> +       switch (opcode) {
> >> +       case IORING_REGISTER_BUFFERS:
> >> +               ret = io_sqe_buffer_register(ctx, arg, nr_args);
> >> +               break;
> >> +       case IORING_UNREGISTER_BUFFERS:
> >> +               ret = -EINVAL;
> >> +               if (arg || nr_args)
> >> +                       break;
> >> +               ret = io_sqe_buffer_unregister(ctx);
> >> +               break;
> >> +       default:
> >> +               ret = -EINVAL;
> >> +               break;
> >> +       }
> >> +
> >> +       /* bring the ctx back to life */
> >> +       reinit_completion(&ctx->ctx_done);
> >> +       percpu_ref_resurrect(&ctx->refs);
> >> +       percpu_ref_get(&ctx->refs);
> >
> > And then this line takes a reference that the caller will immediately
> > drop again? Why?
>
> Just want to keep it symmetric and avoid having weird "this function drops
> a reference" use cases.
>
> >
> >> +       return ret;
> >> +}
> >> +
> >> +SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
> >> +               void __user *, arg, unsigned int, nr_args)
> >> +{
> >> +       struct io_ring_ctx *ctx;
> >> +       long ret = -EBADF;
> >> +       struct fd f;
> >> +
> >> +       f = fdget(fd);
> >> +       if (!f.file)
> >> +               return -EBADF;
> >> +
> >> +       ret = -EOPNOTSUPP;
> >> +       if (f.file->f_op != &io_uring_fops)
> >> +               goto out_fput;
> >> +
> >> +       ret = -ENXIO;
> >> +       ctx = f.file->private_data;
> >> +       if (!percpu_ref_tryget(&ctx->refs))
> >> +               goto out_fput;
> >
> > If you are holding the uring_lock of a ctx that can be accessed
> > through a file descriptor (which you do just after this point), you
> > know that the percpu_ref isn't zero, right? Why are you doing the
> > tryget here?
>
> Not sure I follow... We don't hold the lock at this point. I guess your
> point is that since the descriptor is open (or we'd fail the above
> check), then there's no point doing the tryget variant here? That's
> strictly true, that could just be a get().

As far as I can tell, you could do the following without breaking anything:

========================
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6916dc3222cf..c2d82765eefe 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2485,7 +2485,6 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
        int ret;

        /* Drop our initial ref and wait for the ctx to be fully idle */
-       percpu_ref_put(&ctx->refs);
        percpu_ref_kill(&ctx->refs);
        wait_for_completion(&ctx->ctx_done);

@@ -2516,7 +2515,6 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
        /* bring the ctx back to life */
        reinit_completion(&ctx->ctx_done);
        percpu_ref_resurrect(&ctx->refs);
-       percpu_ref_get(&ctx->refs);
        return ret;
 }

@@ -2535,17 +2533,13 @@ SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
        if (f.file->f_op != &io_uring_fops)
                goto out_fput;

-       ret = -ENXIO;
        ctx = f.file->private_data;
-       if (!percpu_ref_tryget(&ctx->refs))
-               goto out_fput;

        ret = -EBUSY;
        if (mutex_trylock(&ctx->uring_lock)) {
                ret = __io_uring_register(ctx, opcode, arg, nr_args);
                mutex_unlock(&ctx->uring_lock);
        }
-       io_ring_drop_ctx_refs(ctx, 1);
 out_fput:
        fdput(f);
        return ret;
========================

The two functions that can drop the initial ref of the percpu refcount are:

1. io_ring_ctx_wait_and_kill(); this is only used on ->release() or on
setup failure, meaning that as long as you have a reference to the
file from fget()/fdget(), io_ring_ctx_wait_and_kill() can't have been
called on your context
2. __io_uring_register(); this temporarily kills the percpu refcount
and resurrects it, all under ctx->uring_lock, meaning that as long as
you're holding ctx->uring_lock, __io_uring_register() can't have
killed the percpu refcount

Therefore, I think that as long as you're in sys_io_uring_register and
holding the ctx->uring_lock, you know that the percpu refcount is
alive, and bumping and dropping non-initial references has no effect.

Perhaps this makes more sense when you view the percpu refcount as a
read/write lock - percpu_ref_tryget() takes a read lock, the
percpu_ref_kill() dance takes a write lock.
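
In code, the analogy looks roughly like this (illustrative sketch, using
the names from the patch):

        /* "read lock": any number of concurrent users of the ctx */
        if (percpu_ref_tryget(&ctx->refs)) {
                /* ... use the ctx ... */
                percpu_ref_put(&ctx->refs);
        }

        /* "write lock": drop the initial ref, wait for all users to drain */
        percpu_ref_kill(&ctx->refs);
        wait_for_completion(&ctx->ctx_done);
        /* ... exclusive work, e.g. buffer (un)registration ... */
        reinit_completion(&ctx->ctx_done);
        percpu_ref_resurrect(&ctx->refs);        /* "unlock" */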

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  0:34             ` Jann Horn
@ 2019-01-29  0:55               ` Jens Axboe
  2019-01-29  0:58                 ` Jann Horn
  0 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  0:55 UTC (permalink / raw)
  To: Jann Horn
  Cc: Al Viro, linux-aio, linux-block, linux-man, Linux API, hch,
	jmoyer, Avi Kivity

On 1/28/19 5:34 PM, Jann Horn wrote:
> On Tue, Jan 29, 2019 at 1:32 AM Jens Axboe <axboe@kernel.dk> wrote:
>> On 1/28/19 5:03 PM, Jens Axboe wrote:
>>>> But you only do that teardown on ->release, right? And ->release
>>>> doesn't have much to do with the process lifetime.
>>>
>>> Yes, only on ->release().
>>
>> OK, so I reworked the files struct to just grab it, then we ensure that
>> doesn't go away. For mm, it's a bit more tricky. I think the best
>> solution here is to add a fops->flush() and check for the process
>> exiting its files. If it does, we quiesce the async contexts and prevent
>> further use of that mm. We can't just keep holding a reference to the mm
>> like we do with the files.
>>
>> That should solve both cases.
> 
> You still have to hold a reference on the mm though, I think (for
> example, because two tasks might be sharing the fd table without
> sharing the mm).

Yes good point, except we can't hold a reference to it. But I think
we can get around this by using an mmu notifier instead. That eliminates
the need for ->flush() as well.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  0:55               ` Jens Axboe
@ 2019-01-29  0:58                 ` Jann Horn
  2019-01-29  1:01                   ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-01-29  0:58 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Al Viro, linux-aio, linux-block, linux-man, Linux API, hch,
	jmoyer, Avi Kivity

On Tue, Jan 29, 2019 at 1:55 AM Jens Axboe <axboe@kernel.dk> wrote:
> On 1/28/19 5:34 PM, Jann Horn wrote:
> > On Tue, Jan 29, 2019 at 1:32 AM Jens Axboe <axboe@kernel.dk> wrote:
> >> On 1/28/19 5:03 PM, Jens Axboe wrote:
> >>>> But you only do that teardown on ->release, right? And ->release
> >>>> doesn't have much to do with the process lifetime.
> >>>
> >>> Yes, only on ->release().
> >>
> >> OK, so I reworked the files struct to just grab it, then we ensure that
> >> doesn't go away. For mm, it's a bit more tricky. I think the best
> >> solution here is to add a fops->flush() and check for the process
> >> exiting its files. If it does, we quiesce the async contexts and prevent
> >> further use of that mm. We can't just keep holding a reference to the mm
> >> like we do with the files.
> >>
> >> That should solve both cases.
> >
> > You still have to hold a reference on the mm though, I think (for
> > example, because two tasks might be sharing the fd table without
> > sharing the mm).
>
> Yes good point, except we can't hold a reference to it.

Why not? kvm_create_vm() does it, too:

    mmgrab(current->mm);
    kvm->mm = current->mm;

> But I think
> we can get around this by using an mmu notifier instead. That eliminates
> the need for ->flush() as well.

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  0:58                 ` Jann Horn
@ 2019-01-29  1:01                   ` Jens Axboe
  0 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  1:01 UTC (permalink / raw)
  To: Jann Horn
  Cc: Al Viro, linux-aio, linux-block, linux-man, Linux API, hch,
	jmoyer, Avi Kivity

On 1/28/19 5:58 PM, Jann Horn wrote:
> On Tue, Jan 29, 2019 at 1:55 AM Jens Axboe <axboe@kernel.dk> wrote:
>> On 1/28/19 5:34 PM, Jann Horn wrote:
>>> On Tue, Jan 29, 2019 at 1:32 AM Jens Axboe <axboe@kernel.dk> wrote:
>>>> On 1/28/19 5:03 PM, Jens Axboe wrote:
>>>>>> But you only do that teardown on ->release, right? And ->release
>>>>>> doesn't have much to do with the process lifetime.
>>>>>
>>>>> Yes, only on ->release().
>>>>
>>>> OK, so I reworked the files struct to just grab it, then we ensure that
>>>> doesn't go away. For mm, it's a bit more tricky. I think the best
>>>> solution here is to add a fops->flush() and check for the process
>>>> exiting its files. If it does, we quiesce the async contexts and prevent
>>>> further use of that mm. We can't just keep holding a reference to the mm
>>>> like we do with the files.
>>>>
>>>> That should solve both cases.
>>>
>>> You still have to hold a reference on the mm though, I think (for
>>> example, because two tasks might be sharing the fd table without
>>> sharing the mm).
>>
>> Yes good point, except we can't hold a reference to it.
> 
> Why not? kvm_create_vm() does it, too:
> 
>     mmgrab(current->mm);
>     kvm->mm = current->mm;

I missed that helper, was only looking at mmget(). But yeah, that'll do it!
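
For reference, a sketch of how the two pair up (using the sqo_mm field
from the patch): mmgrab() pins the mm_struct itself, while
mmget_not_zero()/mmput() is only needed around actual use of the
address space.

        /* at setup time: pin the mm_struct, paired with mmdrop() at teardown */
        mmgrab(current->mm);
        ctx->sqo_mm = current->mm;

        /* only around actual use of the address space, e.g. in the SQ thread */
        if (mmget_not_zero(ctx->sqo_mm)) {
                use_mm(ctx->sqo_mm);
                /* ... process sqes ... */
                unuse_mm(ctx->sqo_mm);
                mmput(ctx->sqo_mm);
        }

        /* at ring teardown: drop the pin */
        mmdrop(ctx->sqo_mm);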

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 21:35 ` [PATCH 05/18] Add io_uring IO interface Jens Axboe
  2019-01-28 21:53   ` Jeff Moyer
  2019-01-28 22:32   ` Jann Horn
@ 2019-01-29  1:07   ` Jann Horn
  2019-01-29  2:21     ` Jann Horn
  2019-01-29  2:21     ` Jens Axboe
  2019-01-29  1:29   ` Jann Horn
                     ` (2 subsequent siblings)
  5 siblings, 2 replies; 62+ messages in thread
From: Jann Horn @ 2019-01-29  1:07 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
> The submission queue (SQ) and completion queue (CQ) rings are shared
> between the application and the kernel. This eliminates the need to
> copy data back and forth to submit and complete IO.
[...]
> +static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
> +{
> +       struct io_sq_ring *ring = ctx->sq_ring;
> +       unsigned head;
> +
> +       /*
> +        * The cached sq head (or cq tail) serves two purposes:
> +        *
> +        * 1) allows us to batch the cost of updating the user visible
> +        *    head updates.
> +        * 2) allows the kernel side to track the head on its own, even
> +        *    though the application is the one updating it.
> +        */
> +       head = ctx->cached_sq_head;
> +       smp_rmb();
> +       if (head == READ_ONCE(ring->r.tail))
> +               return false;
> +
> +       head = ring->array[head & ctx->sq_mask];
> +       if (head < ctx->sq_entries) {
> +               s->index = head;
> +               s->sqe = &ctx->sq_sqes[head];

ring->array can be mapped writable into userspace, right? If so: This
looks like a double-read issue; the compiler might assume that
ring->array is not modified concurrently and perform separate memory
accesses for the "if (head < ctx->sq_entries)" check and the
"&ctx->sq_sqes[head]" computation. Please use READ_ONCE()/WRITE_ONCE()
for all accesses to memory that userspace could concurrently modify in
a malicious way.

There have been some pretty severe security bugs caused by missing
READ_ONCE() annotations around accesses to shared memory; see, for
example, https://www.blackhat.com/docs/us-16/materials/us-16-Wilhelm-Xenpwn-Breaking-Paravirtualized-Devices.pdf
. Slides 35-48 show how the code "switch (op->cmd)", where "op" is a
pointer to shared memory, allowed an attacker to break out of a Xen
virtual machine because the compiler generated multiple memory
accesses.
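
A sketch of the single-read pattern for the snippet above (illustrative,
not the eventual patch): load the shared entry exactly once and only
ever use that local value.

        unsigned idx = READ_ONCE(ring->array[head & ctx->sq_mask]);

        if (idx < ctx->sq_entries) {
                /* the check and the use both see the same, bounds-checked value */
                s->index = idx;
                s->sqe = &ctx->sq_sqes[idx];
                ctx->cached_sq_head++;
                return true;
        }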

> +               ctx->cached_sq_head++;
> +               return true;
> +       }
> +
> +       /* drop invalid entries */
> +       ctx->cached_sq_head++;
> +       ring->dropped++;
> +       smp_wmb();
> +       return false;
> +}
[...]
> +SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
> +               u32, min_complete, u32, flags, const sigset_t __user *, sig,
> +               size_t, sigsz)
> +{
> +       struct io_ring_ctx *ctx;
> +       long ret = -EBADF;
> +       struct fd f;
> +
> +       f = fdget(fd);
> +       if (!f.file)
> +               return -EBADF;
> +
> +       ret = -EOPNOTSUPP;
> +       if (f.file->f_op != &io_uring_fops)
> +               goto out_fput;

Oh, by the way: If you feel like it, maybe you could add a helper
fdget_typed(int fd, struct file_operations *f_op), or something like
that, so that there is less boilerplate code for first doing fdget(),
then checking ->f_op, and then coding an extra bailout path for that?
But that doesn't really have much to do with your patchset, feel free
to ignore this comment.
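
A sketch of what such a helper could look like (the name and shape are
just the suggestion above, not an existing kernel API):

static inline struct fd fdget_typed(unsigned int fd,
                                    const struct file_operations *fops)
{
        struct fd f = fdget(fd);

        /* only hand back files of the expected type */
        if (f.file && f.file->f_op != fops) {
                fdput(f);
                f.file = NULL;
                f.flags = 0;
        }
        return f;
}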

[...]
> +out_fput:
> +       fdput(f);
> +       return ret;
> +}

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 12/18] io_uring: add support for pre-mapped user IO buffers
  2019-01-29  0:36       ` Jann Horn
@ 2019-01-29  1:25         ` Jens Axboe
  0 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  1:25 UTC (permalink / raw)
  To: Jann Horn
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/28/19 5:36 PM, Jann Horn wrote:
> On Tue, Jan 29, 2019 at 12:50 AM Jens Axboe <axboe@kernel.dk> wrote:
>> On 1/28/19 4:35 PM, Jann Horn wrote:
>>> On Mon, Jan 28, 2019 at 10:36 PM Jens Axboe <axboe@kernel.dk> wrote:
>>>> If we have fixed user buffers, we can map them into the kernel when we
>>>> setup the io_context. That avoids the need to do get_user_pages() for
>>>> each and every IO.
>>> [...]
>>>> +static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>>>> +                              void __user *arg, unsigned nr_args)
>>>> +{
>>>> +       int ret;
>>>> +
>>>> +       /* Drop our initial ref and wait for the ctx to be fully idle */
>>>> +       percpu_ref_put(&ctx->refs);
>>>
>>> The line above drops a reference that you just got in the caller...
>>
>> Right
>>
>>>> +       percpu_ref_kill(&ctx->refs);
>>>> +       wait_for_completion(&ctx->ctx_done);
>>>> +
>>>> +       switch (opcode) {
>>>> +       case IORING_REGISTER_BUFFERS:
>>>> +               ret = io_sqe_buffer_register(ctx, arg, nr_args);
>>>> +               break;
>>>> +       case IORING_UNREGISTER_BUFFERS:
>>>> +               ret = -EINVAL;
>>>> +               if (arg || nr_args)
>>>> +                       break;
>>>> +               ret = io_sqe_buffer_unregister(ctx);
>>>> +               break;
>>>> +       default:
>>>> +               ret = -EINVAL;
>>>> +               break;
>>>> +       }
>>>> +
>>>> +       /* bring the ctx back to life */
>>>> +       reinit_completion(&ctx->ctx_done);
>>>> +       percpu_ref_resurrect(&ctx->refs);
>>>> +       percpu_ref_get(&ctx->refs);
>>>
>>> And then this line takes a reference that the caller will immediately
>>> drop again? Why?
>>
>> Just want to keep it symmetric and avoid having weird "this function drops
>> a reference" use cases.
>>
>>>
>>>> +       return ret;
>>>> +}
>>>> +
>>>> +SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
>>>> +               void __user *, arg, unsigned int, nr_args)
>>>> +{
>>>> +       struct io_ring_ctx *ctx;
>>>> +       long ret = -EBADF;
>>>> +       struct fd f;
>>>> +
>>>> +       f = fdget(fd);
>>>> +       if (!f.file)
>>>> +               return -EBADF;
>>>> +
>>>> +       ret = -EOPNOTSUPP;
>>>> +       if (f.file->f_op != &io_uring_fops)
>>>> +               goto out_fput;
>>>> +
>>>> +       ret = -ENXIO;
>>>> +       ctx = f.file->private_data;
>>>> +       if (!percpu_ref_tryget(&ctx->refs))
>>>> +               goto out_fput;
>>>
>>> If you are holding the uring_lock of a ctx that can be accessed
>>> through a file descriptor (which you do just after this point), you
>>> know that the percpu_ref isn't zero, right? Why are you doing the
>>> tryget here?
>>
>> Not sure I follow... We don't hold the lock at this point. I guess your
>> point is that since the descriptor is open (or we'd fail the above
>> check), then there's no point doing the tryget variant here? That's
>> strictly true, that could just be a get().
> 
> As far as I can tell, you could do the following without breaking anything:
> 
> ========================
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 6916dc3222cf..c2d82765eefe 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -2485,7 +2485,6 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>         int ret;
> 
>         /* Drop our initial ref and wait for the ctx to be fully idle */
> -       percpu_ref_put(&ctx->refs);
>         percpu_ref_kill(&ctx->refs);
>         wait_for_completion(&ctx->ctx_done);
> 
> @@ -2516,7 +2515,6 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
>         /* bring the ctx back to life */
>         reinit_completion(&ctx->ctx_done);
>         percpu_ref_resurrect(&ctx->refs);
> -       percpu_ref_get(&ctx->refs);
>         return ret;
>  }
> 
> @@ -2535,17 +2533,13 @@ SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
>         if (f.file->f_op != &io_uring_fops)
>                 goto out_fput;
> 
> -       ret = -ENXIO;
>         ctx = f.file->private_data;
> -       if (!percpu_ref_tryget(&ctx->refs))
> -               goto out_fput;
> 
>         ret = -EBUSY;
>         if (mutex_trylock(&ctx->uring_lock)) {
>                 ret = __io_uring_register(ctx, opcode, arg, nr_args);
>                 mutex_unlock(&ctx->uring_lock);
>         }
> -       io_ring_drop_ctx_refs(ctx, 1);
>  out_fput:
>         fdput(f);
>         return ret;
> ========================
> 
> The two functions that can drop the initial ref of the percpu refcount are:
> 
> 1. io_ring_ctx_wait_and_kill(); this is only used on ->release() or on
> setup failure, meaning that as long as you have a reference to the
> file from fget()/fdget(), io_ring_ctx_wait_and_kill() can't have been
> called on your context
> 2. __io_uring_register(); this temporarily kills the percpu refcount
> and resurrects it, all under ctx->uring_lock, meaning that as long as
> you're holding ctx->uring_lock, __io_uring_register() can't have
> killed the percpu refcount
> 
> Therefore, I think that as long as you're in sys_io_uring_register and
> holding the ctx->uring_lock, you know that the percpu refcount is
> alive, and bumping and dropping non-initial references has no effect.
> 
> Perhaps this makes more sense when you view the percpu refcount as a
> read/write lock - percpu_ref_tryget() takes a read lock, the
> percpu_ref_kill() dance takes a write lock.

This looks good, I'll fold it in. Thanks!

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 21:35 ` [PATCH 05/18] Add io_uring IO interface Jens Axboe
                     ` (2 preceding siblings ...)
  2019-01-29  1:07   ` Jann Horn
@ 2019-01-29  1:29   ` Jann Horn
  2019-01-29  1:31     ` Jens Axboe
  2019-01-29  7:12   ` Bert Wesarg
  2019-01-29 12:12   ` Florian Weimer
  5 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-01-29  1:29 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
> The submission queue (SQ) and completion queue (CQ) rings are shared
> between the application and the kernel. This eliminates the need to
> copy data back and forth to submit and complete IO.
[...]
> +static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
> +{
> +       struct io_kiocb *req;
> +
> +       /* safe to use the non tryget, as we're inside ring ref already */
> +       percpu_ref_get(&ctx->refs);

Is that true? In the path io_sq_thread() -> io_submit_sqes() ->
io_submit_sqe() -> io_get_req(), I don't see anything that's already
holding a reference for you. Is the worker thread holding a reference
somewhere that I'm missing?

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  1:29   ` Jann Horn
@ 2019-01-29  1:31     ` Jens Axboe
  2019-01-29  1:32       ` Jann Horn
  0 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  1:31 UTC (permalink / raw)
  To: Jann Horn
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/28/19 6:29 PM, Jann Horn wrote:
> On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
>> The submission queue (SQ) and completion queue (CQ) rings are shared
>> between the application and the kernel. This eliminates the need to
>> copy data back and forth to submit and complete IO.
> [...]
>> +static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
>> +{
>> +       struct io_kiocb *req;
>> +
>> +       /* safe to use the non tryget, as we're inside ring ref already */
>> +       percpu_ref_get(&ctx->refs);
> 
> Is that true? In the path io_sq_thread() -> io_submit_sqes() ->
> io_submit_sqe() -> io_get_req(), I don't see anything that's already
> holding a reference for you. Is the worker thread holding a reference
> somewhere that I'm missing?

If the thread is alive, then the ctx is alive. Before we drop the last
ref to the ctx (and kill it), we wait for the thread to exit.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  1:31     ` Jens Axboe
@ 2019-01-29  1:32       ` Jann Horn
  2019-01-29  2:23         ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-01-29  1:32 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On Tue, Jan 29, 2019 at 2:31 AM Jens Axboe <axboe@kernel.dk> wrote:
> On 1/28/19 6:29 PM, Jann Horn wrote:
> > On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
> >> The submission queue (SQ) and completion queue (CQ) rings are shared
> >> between the application and the kernel. This eliminates the need to
> >> copy data back and forth to submit and complete IO.
> > [...]
> >> +static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
> >> +{
> >> +       struct io_kiocb *req;
> >> +
> >> +       /* safe to use the non tryget, as we're inside ring ref already */
> >> +       percpu_ref_get(&ctx->refs);
> >
> > Is that true? In the path io_sq_thread() -> io_submit_sqes() ->
> > io_submit_sqe() -> io_get_req(), I don't see anything that's already
> > holding a reference for you. Is the worker thread holding a reference
> > somewhere that I'm missing?
>
> If the thread is alive, then the ctx is alive. Before we drop the last
> ref to the ctx (and kill it), we wait for the thread to exit.

Where in __io_uring_register() are you waiting for the thread to exit
before killing it? As far as I can tell, you come straight in from
syscall context, take a mutex, and do percpu_ref_kill().

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  1:07   ` Jann Horn
@ 2019-01-29  2:21     ` Jann Horn
  2019-01-29  2:54       ` Jens Axboe
  2019-01-29  3:46       ` Jens Axboe
  2019-01-29  2:21     ` Jens Axboe
  1 sibling, 2 replies; 62+ messages in thread
From: Jann Horn @ 2019-01-29  2:21 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On Tue, Jan 29, 2019 at 2:07 AM Jann Horn <jannh@google.com> wrote:
> On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
> > The submission queue (SQ) and completion queue (CQ) rings are shared
> > between the application and the kernel. This eliminates the need to
> > copy data back and forth to submit and complete IO.
> [...]
> > +static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
> > +{
> > +       struct io_sq_ring *ring = ctx->sq_ring;
> > +       unsigned head;
> > +
> > +       /*
> > +        * The cached sq head (or cq tail) serves two purposes:
> > +        *
> > +        * 1) allows us to batch the cost of updating the user visible
> > +        *    head updates.
> > +        * 2) allows the kernel side to track the head on its own, even
> > +        *    though the application is the one updating it.
> > +        */
> > +       head = ctx->cached_sq_head;
> > +       smp_rmb();
> > +       if (head == READ_ONCE(ring->r.tail))
> > +               return false;
> > +
> > +       head = ring->array[head & ctx->sq_mask];
> > +       if (head < ctx->sq_entries) {
> > +               s->index = head;
> > +               s->sqe = &ctx->sq_sqes[head];
>
> ring->array can be mapped writable into userspace, right? If so: This
> looks like a double-read issue; the compiler might assume that
> ring->array is not modified concurrently and perform separate memory
> accesses for the "if (head < ctx->sq_entries)" check and the
> "&ctx->sq_sqes[head]" computation. Please use READ_ONCE()/WRITE_ONCE()
> for all accesses to memory that userspace could concurrently modify in
> a malicious way.
>
> There have been some pretty severe security bugs caused by missing
> READ_ONCE() annotations around accesses to shared memory; see, for
> example, https://www.blackhat.com/docs/us-16/materials/us-16-Wilhelm-Xenpwn-Breaking-Paravirtualized-Devices.pdf
> . Slides 35-48 show how the code "switch (op->cmd)", where "op" is a
> pointer to shared memory, allowed an attacker to break out of a Xen
> virtual machine because the compiler generated multiple memory
> accesses.

Oh, actually, it's even worse (comments with "//" added by me):

io_sq_thread() does this:

        do {
                // sqes[i].sqe is pointer to shared memory, result of
                // io_sqe_needs_user() is unreliable
                if (all_fixed && io_sqe_needs_user(sqes[i].sqe))
                        all_fixed = false;

                i++;
                if (i == ARRAY_SIZE(sqes))
                        break;
        } while (io_get_sqring(ctx, &sqes[i]));
        // sqes[...].sqe are pointers to shared memory

        io_commit_sqring(ctx);

        /* Unless all new commands are FIXED regions, grab mm */
        if (!all_fixed && !cur_mm) {
                mm_fault = !mmget_not_zero(ctx->sqo_mm);
                if (!mm_fault) {
                        use_mm(ctx->sqo_mm);
                        cur_mm = ctx->sqo_mm;
                }
        }

        inflight += io_submit_sqes(ctx, sqes, i, mm_fault);

Then the shared memory pointers go into io_submit_sqes():

static int io_submit_sqes(struct io_ring_ctx *ctx, struct sqe_submit *sqes,
                          unsigned int nr, bool mm_fault)
{
        struct io_submit_state state, *statep = NULL;
        int ret, i, submitted = 0;
        // sqes[...].sqe are pointers to shared memory
        [...]
        for (i = 0; i < nr; i++) {
                if (unlikely(mm_fault))
                        ret = -EFAULT;
                else
                        ret = io_submit_sqe(ctx, &sqes[i], statep);
                [...]
        }
        [...]
}

And on into io_submit_sqe():

static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
                         struct io_submit_state *state)
{
        [...]
        ret = __io_submit_sqe(ctx, req, s, true, state);
        [...]
}

And there it gets interesting:

static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
                           struct sqe_submit *s, bool force_nonblock,
                           struct io_submit_state *state)
{
        // s->sqe is a pointer to shared memory
        const struct io_uring_sqe *sqe = s->sqe;
        // sqe is a pointer to shared memory
        ssize_t ret;

        if (unlikely(s->index >= ctx->sq_entries))
                return -EINVAL;
        req->user_data = sqe->user_data;

        ret = -EINVAL;
        // switch() on read from shared memory, potential instruction pointer
        // control
        switch (sqe->opcode) {
        [...]
        case IORING_OP_READV:
                if (unlikely(sqe->buf_index))
                        return -EINVAL;
                ret = io_read(req, sqe, force_nonblock, state);
                break;
        [...]
        case IORING_OP_READ_FIXED:
                ret = io_read(req, sqe, force_nonblock, state);
                break;
        [...]
        }
        [...]
}

On into io_read():

static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
                       bool force_nonblock, struct io_submit_state *state)
{
[...]
        // sqe is a pointer to shared memory
        ret = io_prep_rw(req, sqe, force_nonblock, state);
        [...]
}

And then io_prep_rw() does multiple reads even in the source code:

static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
                      bool force_nonblock, struct io_submit_state *state)
{
        struct io_ring_ctx *ctx = req->ctx;
        struct kiocb *kiocb = &req->rw;
        int ret;

        // sqe is a pointer to shared memory

        // double-read of sqe->flags, see end of function
        if (sqe->flags & IOSQE_FIXED_FILE) {
                // double-read of sqe->fd for the bounds check and the array access, potential OOB pointer read
                if (unlikely(!ctx->user_files || sqe->fd >= ctx->nr_user_files))
                        return -EBADF;
                kiocb->ki_filp = ctx->user_files[sqe->fd];
                req->flags |= REQ_F_FIXED_FILE;
        } else {
                kiocb->ki_filp = io_file_get(state, sqe->fd);
        }
        if (unlikely(!kiocb->ki_filp))
                return -EBADF;
        kiocb->ki_pos = sqe->off;
        kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
        kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
        // three reads of sqe->ioprio, bypassable capability check
        if (sqe->ioprio) {
                ret = ioprio_check_cap(sqe->ioprio);
                if (ret)
                        goto out_fput;

                kiocb->ki_ioprio = sqe->ioprio;
        } else
                kiocb->ki_ioprio = get_current_ioprio();
        [...]
        return 0;
out_fput:
        // double-read of sqe->flags, changed value can lead to unbalanced refcount
        if (!(sqe->flags & IOSQE_FIXED_FILE))
                io_file_put(state, kiocb->ki_filp);
        return ret;
}

Please create a local copy of the request before parsing it to keep
the data from changing under you. Additionally, it might make sense to
annotate every pointer to shared memory with a comment, or something
like that, to ensure that anyone looking at the code can immediately
see for which pointers special caution is required on access.
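
A sketch of the local-copy approach being asked for here (the wrapper
name is hypothetical; a real version would also have to copy into
storage that outlives this call if the request gets punted to async
context):

static int io_submit_one_copied(struct io_ring_ctx *ctx, struct io_kiocb *req,
                                struct sqe_submit *s, struct io_submit_state *state)
{
        struct io_uring_sqe sqe_copy;
        struct sqe_submit priv = *s;

        /* one bulk read of the shared entry; everything below parses the copy */
        memcpy(&sqe_copy, s->sqe, sizeof(sqe_copy));
        priv.sqe = &sqe_copy;

        return __io_submit_sqe(ctx, req, &priv, true, state);
}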

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  1:07   ` Jann Horn
  2019-01-29  2:21     ` Jann Horn
@ 2019-01-29  2:21     ` Jens Axboe
  1 sibling, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  2:21 UTC (permalink / raw)
  To: Jann Horn
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/28/19 6:07 PM, Jann Horn wrote:
> On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
>> The submission queue (SQ) and completion queue (CQ) rings are shared
>> between the application and the kernel. This eliminates the need to
>> copy data back and forth to submit and complete IO.
> [...]
>> +static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
>> +{
>> +       struct io_sq_ring *ring = ctx->sq_ring;
>> +       unsigned head;
>> +
>> +       /*
>> +        * The cached sq head (or cq tail) serves two purposes:
>> +        *
>> +        * 1) allows us to batch the cost of updating the user visible
>> +        *    head updates.
>> +        * 2) allows the kernel side to track the head on its own, even
>> +        *    though the application is the one updating it.
>> +        */
>> +       head = ctx->cached_sq_head;
>> +       smp_rmb();
>> +       if (head == READ_ONCE(ring->r.tail))
>> +               return false;
>> +
>> +       head = ring->array[head & ctx->sq_mask];
>> +       if (head < ctx->sq_entries) {
>> +               s->index = head;
>> +               s->sqe = &ctx->sq_sqes[head];
> 
> ring->array can be mapped writable into userspace, right? If so: This
> looks like a double-read issue; the compiler might assume that
> ring->array is not modified concurrently and perform separate memory
> accesses for the "if (head < ctx->sq_entries)" check and the
> "&ctx->sq_sqes[head]" computation. Please use READ_ONCE()/WRITE_ONCE()
> for all accesses to memory that userspace could concurrently modify in
> a malicious way.
> 
> There have been some pretty severe security bugs caused by missing
> READ_ONCE() annotations around accesses to shared memory; see, for
> example, https://www.blackhat.com/docs/us-16/materials/us-16-Wilhelm-Xenpwn-Breaking-Paravirtualized-Devices.pdf
> . Slides 35-48 show how the code "switch (op->cmd)", where "op" is a
> pointer to shared memory, allowed an attacker to break out of a Xen
> virtual machine because the compiler generated multiple memory
> accesses.

Thanks, I'll update these to use READ/WRITE_ONCE.

>> +               ctx->cached_sq_head++;
>> +               return true;
>> +       }
>> +
>> +       /* drop invalid entries */
>> +       ctx->cached_sq_head++;
>> +       ring->dropped++;
>> +       smp_wmb();
>> +       return false;
>> +}
> [...]
>> +SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
>> +               u32, min_complete, u32, flags, const sigset_t __user *, sig,
>> +               size_t, sigsz)
>> +{
>> +       struct io_ring_ctx *ctx;
>> +       long ret = -EBADF;
>> +       struct fd f;
>> +
>> +       f = fdget(fd);
>> +       if (!f.file)
>> +               return -EBADF;
>> +
>> +       ret = -EOPNOTSUPP;
>> +       if (f.file->f_op != &io_uring_fops)
>> +               goto out_fput;
> 
> Oh, by the way: If you feel like it, maybe you could add a helper
> fdget_typed(int fd, struct file_operations *f_op), or something like
> that, so that there is less boilerplate code for first doing fdget(),
> then checking ->f_op, and then coding an extra bailout path for that?
> But that doesn't really have much to do with your patchset, feel free
> to ignore this comment.

That's not a bad idea, I think this is a fairly common code pattern.
I'll look into it.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  1:32       ` Jann Horn
@ 2019-01-29  2:23         ` Jens Axboe
  0 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  2:23 UTC (permalink / raw)
  To: Jann Horn
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/28/19 6:32 PM, Jann Horn wrote:
> On Tue, Jan 29, 2019 at 2:31 AM Jens Axboe <axboe@kernel.dk> wrote:
>> On 1/28/19 6:29 PM, Jann Horn wrote:
>>> On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
>>>> The submission queue (SQ) and completion queue (CQ) rings are shared
>>>> between the application and the kernel. This eliminates the need to
>>>> copy data back and forth to submit and complete IO.
>>> [...]
>>>> +static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
>>>> +{
>>>> +       struct io_kiocb *req;
>>>> +
>>>> +       /* safe to use the non tryget, as we're inside ring ref already */
>>>> +       percpu_ref_get(&ctx->refs);
>>>
>>> Is that true? In the path io_sq_thread() -> io_submit_sqes() ->
>>> io_submit_sqe() -> io_get_req(), I don't see anything that's already
>>> holding a reference for you. Is the worker thread holding a reference
>>> somewhere that I'm missing?
>>
>> If the thread is alive, then the ctx is alive. Before we drop the last
>> ref to the ctx (and kill it), we wait for the thread to exit.
> 
> Where in __io_uring_register() are you waiting for the thread to exit
> before killing it? As far as I can tell, you come straight in from
> syscall context, take a mutex, and do percpu_ref_kill().

I was just referring to the regular shutdown path. You are right that
more care needs to be taken for the later io_uring_register() case. I'll
just switch to the tryget; it's not like the non-try is a big cycle
saver by any stretch.
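
In sketch form (hypothetical shape of the change, not the actual commit;
io_alloc_req() stands in for the existing allocation path):

static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
{
        struct io_kiocb *req;

        /*
         * The try variant fails once io_uring_register() (or final
         * teardown) has killed the ref, so submission backs off instead
         * of racing a quiesced ctx.
         */
        if (!percpu_ref_tryget(&ctx->refs))
                return NULL;

        req = io_alloc_req(ctx);        /* hypothetical helper */
        if (!req)
                percpu_ref_put(&ctx->refs);
        return req;
}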

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  2:21     ` Jann Horn
@ 2019-01-29  2:54       ` Jens Axboe
  2019-01-29  3:46       ` Jens Axboe
  1 sibling, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  2:54 UTC (permalink / raw)
  To: Jann Horn
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/28/19 7:21 PM, Jann Horn wrote:
> On Tue, Jan 29, 2019 at 2:07 AM Jann Horn <jannh@google.com> wrote:
>> On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
>>> The submission queue (SQ) and completion queue (CQ) rings are shared
>>> between the application and the kernel. This eliminates the need to
>>> copy data back and forth to submit and complete IO.
>> [...]
>>> +static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
>>> +{
>>> +       struct io_sq_ring *ring = ctx->sq_ring;
>>> +       unsigned head;
>>> +
>>> +       /*
>>> +        * The cached sq head (or cq tail) serves two purposes:
>>> +        *
>>> +        * 1) allows us to batch the cost of updating the user visible
>>> +        *    head updates.
>>> +        * 2) allows the kernel side to track the head on its own, even
>>> +        *    though the application is the one updating it.
>>> +        */
>>> +       head = ctx->cached_sq_head;
>>> +       smp_rmb();
>>> +       if (head == READ_ONCE(ring->r.tail))
>>> +               return false;
>>> +
>>> +       head = ring->array[head & ctx->sq_mask];
>>> +       if (head < ctx->sq_entries) {
>>> +               s->index = head;
>>> +               s->sqe = &ctx->sq_sqes[head];
>>
>> ring->array can be mapped writable into userspace, right? If so: This
>> looks like a double-read issue; the compiler might assume that
>> ring->array is not modified concurrently and perform separate memory
>> accesses for the "if (head < ctx->sq_entries)" check and the
>> "&ctx->sq_sqes[head]" computation. Please use READ_ONCE()/WRITE_ONCE()
>> for all accesses to memory that userspace could concurrently modify in
>> a malicious way.
>>
>> There have been some pretty severe security bugs caused by missing
>> READ_ONCE() annotations around accesses to shared memory; see, for
>> example, https://www.blackhat.com/docs/us-16/materials/us-16-Wilhelm-Xenpwn-Breaking-Paravirtualized-Devices.pdf
>> . Slides 35-48 show how the code "switch (op->cmd)", where "op" is a
>> pointer to shared memory, allowed an attacker to break out of a Xen
>> virtual machine because the compiler generated multiple memory
>> accesses.
> 
> Oh, actually, it's even worse (comments with "//" added by me):
> 
> io_sq_thread() does this:
> 
>         do {
>                 // sqes[i].sqe is pointer to shared memory, result of
>                 // io_sqe_needs_user() is unreliable
>                 if (all_fixed && io_sqe_needs_user(sqes[i].sqe))
>                         all_fixed = false;
> 
>                 i++;
>                 if (i == ARRAY_SIZE(sqes))
>                         break;
>         } while (io_get_sqring(ctx, &sqes[i]));
>         // sqes[...].sqe are pointers to shared memory
> 
>         io_commit_sqring(ctx);
> 
>         /* Unless all new commands are FIXED regions, grab mm */
>         if (!all_fixed && !cur_mm) {
>                 mm_fault = !mmget_not_zero(ctx->sqo_mm);
>                 if (!mm_fault) {
>                         use_mm(ctx->sqo_mm);
>                         cur_mm = ctx->sqo_mm;
>                 }
>         }
> 
>         inflight += io_submit_sqes(ctx, sqes, i, mm_fault);
> 
> Then the shared memory pointers go into io_submit_sqes():
> 
> static int io_submit_sqes(struct io_ring_ctx *ctx, struct sqe_submit *sqes,
>                           unsigned int nr, bool mm_fault)
> {
>         struct io_submit_state state, *statep = NULL;
>         int ret, i, submitted = 0;
>         // sqes[...].sqe are pointers to shared memory
>         [...]
>         for (i = 0; i < nr; i++) {
>                 if (unlikely(mm_fault))
>                         ret = -EFAULT;
>                 else
>                         ret = io_submit_sqe(ctx, &sqes[i], statep);
>                 [...]
>         }
>         [...]
> }
> 
> And on into io_submit_sqe():
> 
> static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
>                          struct io_submit_state *state)
> {
>         [...]
>         ret = __io_submit_sqe(ctx, req, s, true, state);
>         [...]
> }
> 
> And there it gets interesting:
> 
> static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>                            struct sqe_submit *s, bool force_nonblock,
>                            struct io_submit_state *state)
> {
>         // s->sqe is a pointer to shared memory
>         const struct io_uring_sqe *sqe = s->sqe;
>         // sqe is a pointer to shared memory
>         ssize_t ret;
> 
>         if (unlikely(s->index >= ctx->sq_entries))
>                 return -EINVAL;
>         req->user_data = sqe->user_data;
> 
>         ret = -EINVAL;
>         // switch() on read from shared memory, potential instruction pointer
>         // control
>         switch (sqe->opcode) {
>         [...]
>         case IORING_OP_READV:
>                 if (unlikely(sqe->buf_index))
>                         return -EINVAL;
>                 ret = io_read(req, sqe, force_nonblock, state);
>                 break;
>         [...]
>         case IORING_OP_READ_FIXED:
>                 ret = io_read(req, sqe, force_nonblock, state);
>                 break;
>         [...]
>         }
>         [...]
> }
> 
> On into io_read():
> 
> static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>                        bool force_nonblock, struct io_submit_state *state)
> {
> [...]
>         // sqe is a pointer to shared memory
>         ret = io_prep_rw(req, sqe, force_nonblock, state);
>         [...]
> }
> 
> And then io_prep_rw() does multiple reads even in the source code:
> 
> static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>                       bool force_nonblock, struct io_submit_state *state)
> {
>         struct io_ring_ctx *ctx = req->ctx;
>         struct kiocb *kiocb = &req->rw;
>         int ret;
> 
>         // sqe is a pointer to shared memory
> 
>         // double-read of sqe->flags, see end of function
>         if (sqe->flags & IOSQE_FIXED_FILE) {
>                 // double-read of sqe->fd for the bounds check and the array access, potential OOB pointer read
>                 if (unlikely(!ctx->user_files || sqe->fd >= ctx->nr_user_files))
>                         return -EBADF;
>                 kiocb->ki_filp = ctx->user_files[sqe->fd];
>                 req->flags |= REQ_F_FIXED_FILE;
>         } else {
>                 kiocb->ki_filp = io_file_get(state, sqe->fd);
>         }
>         if (unlikely(!kiocb->ki_filp))
>                 return -EBADF;
>         kiocb->ki_pos = sqe->off;
>         kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
>         kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
>         // three reads of sqe->ioprio, bypassable capability check
>         if (sqe->ioprio) {
>                 ret = ioprio_check_cap(sqe->ioprio);
>                 if (ret)
>                         goto out_fput;
> 
>                 kiocb->ki_ioprio = sqe->ioprio;
>         } else
>                 kiocb->ki_ioprio = get_current_ioprio();
>         [...]
>         return 0;
> out_fput:
>         // double-read of sqe->flags, changed value can lead to unbalanced refcount
>         if (!(sqe->flags & IOSQE_FIXED_FILE))
>                 io_file_put(state, kiocb->ki_filp);
>         return ret;
> }
> 
> Please create a local copy of the request before parsing it to keep
> the data from changing under you. Additionally, it might make sense to
> annotate every pointer to shared memory with a comment, or something
> like that, to ensure that anyone looking at the code can immediately
> see for which pointers special caution is required on access.

Ugh, that's pretty dire. But good catch; I'll fix that up so the
application changing the sqe maliciously won't affect the kernel. I hope
we can get away with NOT copying the whole sqe, but we'll do that if we
have to.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  2:21     ` Jann Horn
  2019-01-29  2:54       ` Jens Axboe
@ 2019-01-29  3:46       ` Jens Axboe
  2019-01-29 15:56         ` Jann Horn
  1 sibling, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-29  3:46 UTC (permalink / raw)
  To: Jann Horn
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/28/19 7:21 PM, Jann Horn wrote:
> Please create a local copy of the request before parsing it to keep
> the data from changing under you. Additionally, it might make sense to
> annotate every pointer to shared memory with a comment, or something
> like that, to ensure that anyone looking at the code can immediately
> see for which pointers special caution is required on access.

I took a look at the viability of NOT having to make a local copy of
the data, and I don't think it's too bad. The local copy has a
noticeable impact on performance, hence I'd really (REALLY) like to
avoid it.

Here's something on top of the current git branch. I think I even went a
bit too far in some areas, but it should hopefully catch the cases where
we might end up double evaluating the parts of the sqe that we depend
on. For most of the sqe reading we don't really care too much. For
instance, the sqe->user_data: if the app changes this field, then it
just gets that changed value passed back in cqe->user_data. That's not a
kernel issue.

For cases like addr/len etc. validation, it should be sound. I'll
double-check this in the morning as well; obviously this would need to
be folded in along the way.

I'd appreciate your opinion on this part, if you see any major issues
with it, or if I missed something.


diff --git a/fs/io_uring.c b/fs/io_uring.c
index e8760ad02e82..64d090300990 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -668,31 +668,35 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct kiocb *kiocb = &req->rw;
-	int ret;
+	unsigned flags, ioprio;
+	int fd, ret;
 
-	if (sqe->flags & IOSQE_FIXED_FILE) {
-		if (unlikely(!ctx->user_files || sqe->fd >= ctx->nr_user_files))
+	flags = READ_ONCE(sqe->flags);
+	fd = READ_ONCE(sqe->fd);
+	if (flags & IOSQE_FIXED_FILE) {
+		if (unlikely(!ctx->user_files || fd >= ctx->nr_user_files))
 			return -EBADF;
-		kiocb->ki_filp = ctx->user_files[sqe->fd];
+		kiocb->ki_filp = ctx->user_files[fd];
 		req->flags |= REQ_F_FIXED_FILE;
 	} else {
-		kiocb->ki_filp = io_file_get(state, sqe->fd);
+		kiocb->ki_filp = io_file_get(state, fd);
 	}
 	if (unlikely(!kiocb->ki_filp))
 		return -EBADF;
-	kiocb->ki_pos = sqe->off;
+	kiocb->ki_pos = READ_ONCE(sqe->off);
 	kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
 	kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
-	if (sqe->ioprio) {
-		ret = ioprio_check_cap(sqe->ioprio);
+	ioprio = READ_ONCE(sqe->ioprio);
+	if (ioprio) {
+		ret = ioprio_check_cap(ioprio);
 		if (ret)
 			goto out_fput;
 
-		kiocb->ki_ioprio = sqe->ioprio;
+		kiocb->ki_ioprio = ioprio;
 	} else
 		kiocb->ki_ioprio = get_current_ioprio();
 
-	ret = kiocb_set_rw_flags(kiocb, sqe->rw_flags);
+	ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
 	if (unlikely(ret))
 		goto out_fput;
 	if (force_nonblock) {
@@ -716,7 +720,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	}
 	return 0;
 out_fput:
-	if (!(sqe->flags & IOSQE_FIXED_FILE))
+	if (!(flags & IOSQE_FIXED_FILE))
 		io_file_put(state, kiocb->ki_filp);
 	return ret;
 }
@@ -746,28 +750,31 @@ static int io_import_fixed(struct io_ring_ctx *ctx, int rw,
 			   const struct io_uring_sqe *sqe,
 			   struct iov_iter *iter)
 {
+	size_t len = READ_ONCE(sqe->len);
 	struct io_mapped_ubuf *imu;
-	size_t len = sqe->len;
+	int buf_index, index;
 	size_t offset;
-	int index;
+	u64 buf_addr;
 
 	/* attempt to use fixed buffers without having provided iovecs */
 	if (unlikely(!ctx->user_bufs))
 		return -EFAULT;
-	if (unlikely(sqe->buf_index >= ctx->nr_user_bufs))
+
+	buf_index = READ_ONCE(sqe->buf_index);
+	if (unlikely(buf_index >= ctx->nr_user_bufs))
 		return -EFAULT;
 
-	index = array_index_nospec(sqe->buf_index, ctx->sq_entries);
+	index = array_index_nospec(buf_index, ctx->sq_entries);
 	imu = &ctx->user_bufs[index];
-	if ((unsigned long) sqe->addr < imu->ubuf ||
-	    (unsigned long) sqe->addr + len > imu->ubuf + imu->len)
+	buf_addr = READ_ONCE(sqe->addr);
+	if (buf_addr < imu->ubuf || buf_addr + len > imu->ubuf + imu->len)
 		return -EFAULT;
 
 	/*
 	 * May not be a start of buffer, set size appropriately
 	 * and advance us to the beginning.
 	 */
-	offset = (unsigned long) sqe->addr - imu->ubuf;
+	offset = buf_addr - imu->ubuf;
 	iov_iter_bvec(iter, rw, imu->bvec, imu->nr_bvecs, offset + len);
 	if (offset)
 		iov_iter_advance(iter, offset);
@@ -778,10 +785,12 @@ static int io_import_iovec(struct io_ring_ctx *ctx, int rw,
 			   const struct io_uring_sqe *sqe,
 			   struct iovec **iovec, struct iov_iter *iter)
 {
-	void __user *buf = u64_to_user_ptr(sqe->addr);
+	void __user *buf = u64_to_user_ptr(READ_ONCE(sqe->addr));
+	int opcode;
 
-	if (sqe->opcode == IORING_OP_READ_FIXED ||
-	    sqe->opcode == IORING_OP_WRITE_FIXED) {
+	opcode = READ_ONCE(sqe->opcode);
+	if (opcode == IORING_OP_READ_FIXED ||
+	    opcode == IORING_OP_WRITE_FIXED) {
 		ssize_t ret = io_import_fixed(ctx, rw, sqe, iter);
 		*iovec = NULL;
 		return ret;
@@ -789,11 +798,12 @@ static int io_import_iovec(struct io_ring_ctx *ctx, int rw,
 
 #ifdef CONFIG_COMPAT
 	if (in_compat_syscall())
-		return compat_import_iovec(rw, buf, sqe->len, UIO_FASTIOV,
-						iovec, iter);
+		return compat_import_iovec(rw, buf, READ_ONCE(sqe->len),
+						UIO_FASTIOV, iovec, iter);
 #endif
 
-	return import_iovec(rw, buf, sqe->len, UIO_FASTIOV, iovec, iter);
+	return import_iovec(rw, buf, READ_ONCE(sqe->len), UIO_FASTIOV, iovec,
+				iter);
 }
 
 static void io_async_list_note(int rw, struct io_kiocb *req, size_t len)
@@ -939,14 +949,14 @@ static ssize_t io_write(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 /*
  * IORING_OP_NOP just posts a completion event, nothing else.
  */
-static int io_nop(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+static int io_nop(struct io_kiocb *req, u64 user_data)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 
 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
 
-	io_cqring_add_event(ctx, sqe->user_data, 0, 0);
+	io_cqring_add_event(ctx, user_data, 0, 0);
 	io_free_req(req);
 	return 0;
 }
@@ -955,9 +965,12 @@ static int io_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 		    bool force_nonblock)
 {
 	struct io_ring_ctx *ctx = req->ctx;
-	loff_t end = sqe->off + sqe->len;
+	loff_t sqe_off = READ_ONCE(sqe->off);
+	loff_t sqe_len = READ_ONCE(sqe->len);
+	loff_t end = sqe_off + sqe_len;
 	struct file *file;
-	int ret;
+	unsigned flags;
+	int ret, fd;
 
 	/* fsync always requires a blocking context */
 	if (force_nonblock)
@@ -970,21 +983,23 @@ static int io_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	if (unlikely(sqe->fsync_flags & ~IORING_FSYNC_DATASYNC))
 		return -EINVAL;
 
-	if (sqe->flags & IOSQE_FIXED_FILE) {
-		if (unlikely(!ctx->user_files || sqe->fd >= ctx->nr_user_files))
+	fd = READ_ONCE(sqe->fd);
+	flags = READ_ONCE(sqe->flags);
+	if (flags & IOSQE_FIXED_FILE) {
+		if (unlikely(!ctx->user_files || fd >= ctx->nr_user_files))
 			return -EBADF;
-		file = ctx->user_files[sqe->fd];
+		file = ctx->user_files[fd];
 	} else {
-		file = fget(sqe->fd);
+		file = fget(fd);
 	}
 
 	if (unlikely(!file))
 		return -EBADF;
 
-	ret = vfs_fsync_range(file, sqe->off, end > 0 ? end : LLONG_MAX,
+	ret = vfs_fsync_range(file, sqe_off, end > 0 ? end : LLONG_MAX,
 			sqe->fsync_flags & IORING_FSYNC_DATASYNC);
 
-	if (!(sqe->flags & IOSQE_FIXED_FILE))
+	if (!(flags & IOSQE_FIXED_FILE))
 		fput(file);
 
 	io_cqring_add_event(ctx, sqe->user_data, ret, 0);
@@ -1037,7 +1052,7 @@ static int io_poll_remove(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 	spin_lock_irq(&ctx->completion_lock);
 	list_for_each_entry_safe(poll_req, next, &ctx->cancel_list, list) {
-		if (sqe->addr == poll_req->user_data) {
+		if (READ_ONCE(sqe->addr) == poll_req->user_data) {
 			io_poll_remove_one(poll_req);
 			ret = 0;
 			break;
@@ -1145,7 +1160,10 @@ static int io_poll_add(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	struct io_poll_iocb *poll = &req->poll;
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_poll_table ipt;
+	unsigned flags;
 	__poll_t mask;
+	u16 events;
+	int fd;
 
 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
@@ -1153,15 +1171,18 @@ static int io_poll_add(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return -EINVAL;
 
 	INIT_WORK(&req->work, io_poll_complete_work);
-	poll->events = demangle_poll(sqe->poll_events) | EPOLLERR | EPOLLHUP;
+	events = READ_ONCE(sqe->poll_events);
+	poll->events = demangle_poll(events) | EPOLLERR | EPOLLHUP;
 
-	if (sqe->flags & IOSQE_FIXED_FILE) {
-		if (unlikely(!ctx->user_files || sqe->fd >= ctx->nr_user_files))
+	flags = READ_ONCE(sqe->flags);
+	fd = READ_ONCE(sqe->fd);
+	if (flags & IOSQE_FIXED_FILE) {
+		if (unlikely(!ctx->user_files || fd >= ctx->nr_user_files))
 			return -EBADF;
-		poll->file = ctx->user_files[sqe->fd];
+		poll->file = ctx->user_files[fd];
 		req->flags |= REQ_F_FIXED_FILE;
 	} else {
-		poll->file = fget(sqe->fd);
+		poll->file = fget(fd);
 	}
 	if (unlikely(!poll->file))
 		return -EBADF;
@@ -1207,7 +1228,7 @@ static int io_poll_add(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 out:
 	if (unlikely(ipt.error)) {
-		if (!(sqe->flags & IOSQE_FIXED_FILE))
+		if (!(flags & IOSQE_FIXED_FILE))
 			fput(poll->file);
 		return ipt.error;
 	}
@@ -1224,15 +1245,17 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 {
 	const struct io_uring_sqe *sqe = s->sqe;
 	ssize_t ret;
+	int opcode;
 
 	if (unlikely(s->index >= ctx->sq_entries))
 		return -EINVAL;
-	req->user_data = sqe->user_data;
+	req->user_data = READ_ONCE(sqe->user_data);
 
 	ret = -EINVAL;
-	switch (sqe->opcode) {
+	opcode = READ_ONCE(sqe->opcode);
+	switch (opcode) {
 	case IORING_OP_NOP:
-		ret = io_nop(req, sqe);
+		ret = io_nop(req, req->user_data);
 		break;
 	case IORING_OP_READV:
 		if (unlikely(sqe->buf_index))
@@ -1317,7 +1340,7 @@ static void io_sq_wq_submit_work(struct work_struct *work)
 restart:
 	do {
 		struct sqe_submit *s = &req->submit;
-		u64 user_data = s->sqe->user_data;
+		u64 user_data = READ_ONCE(s->sqe->user_data);
 
 		/* Ensure we clear previously set forced non-block flag */
 		req->flags &= ~REQ_F_FORCE_NONBLOCK;

-- 
Jens Axboe


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 21:35 ` [PATCH 05/18] Add io_uring IO interface Jens Axboe
                     ` (3 preceding siblings ...)
  2019-01-29  1:29   ` Jann Horn
@ 2019-01-29  7:12   ` Bert Wesarg
  2019-01-29 12:12   ` Florian Weimer
  5 siblings, 0 replies; 62+ messages in thread
From: Bert Wesarg @ 2019-01-29  7:12 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-aio, linux-block, linux-man, linux-api, hch, jmoyer, avi

On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> The submission queue (SQ) and completion queue (CQ) rings are shared
> between the application and the kernel. This eliminates the need to
> copy data back and forth to submit and complete IO.
>
> IO submissions use the io_uring_sqe data structure, and completions
> are generated in the form of io_uring_cqe data structures. The SQ
> ring is an index into the io_uring_sqe array, which makes it possible
> to submit a batch of IOs without them being contiguous in the ring.
> The CQ ring is always contiguous, as completion events are inherently
> unordered, and hence any io_uring_cqe entry can point back to an
> arbitrary submission.
>
> Two new system calls are added for this:
>
> io_uring_setup(entries, params)
>         Sets up a context for doing async IO. On success, returns a file
>         descriptor that the application can mmap to gain access to the
>         SQ ring, CQ ring, and io_uring_sqes.
>
> io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
>         Initiates IO against the rings mapped to this fd, or waits for
>         them to complete, or both. The behavior is controlled by the
>         parameters passed in. If 'to_submit' is non-zero, then we'll
>         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
>         kernel will wait for 'min_complete' events, if they aren't
>         already available. It's valid to set IORING_ENTER_GETEVENTS
>         and 'min_complete' == 0 at the same time, this allows the
>         kernel to return already completed events without waiting
>         for them. This is useful only for polling, as for IRQ
>         driven IO, the application can just check the CQ ring
>         without entering the kernel.
>
> With this setup, it's possible to do async IO with a single system
> call. Future developments will enable polled IO with this interface,
> and polled submission as well. The latter will enable an application
> to do IO without doing ANY system calls at all.
>
> For IRQ driven IO, an application only needs to enter the kernel for
> completions if it wants to wait for them to occur.
>
> Each io_uring is backed by a workqueue, to support buffered async IO
> as well. We will only punt to an async context if the command would
> need to wait for IO on the device side. Any data that can be accessed
> directly in the page cache is done inline. This avoids the slowness
> issue of usual threadpools, since cached data is accessed as quickly
> as a sync interface.
>
> Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  arch/x86/entry/syscalls/syscall_32.tbl |    2 +
>  arch/x86/entry/syscalls/syscall_64.tbl |    2 +
>  fs/Makefile                            |    1 +
>  fs/io_uring.c                          | 1090 ++++++++++++++++++++++++
>  include/linux/syscalls.h               |    6 +
>  include/uapi/asm-generic/unistd.h      |    6 +-
>  include/uapi/linux/io_uring.h          |   96 +++
>  init/Kconfig                           |    9 +
>  kernel/sys_ni.c                        |    2 +
>  9 files changed, 1213 insertions(+), 1 deletion(-)
>  create mode 100644 fs/io_uring.c
>  create mode 100644 include/uapi/linux/io_uring.h
>
> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
> new file mode 100644
> index 000000000000..ce65db9269a8
> --- /dev/null
> +++ b/include/uapi/linux/io_uring.h
> @@ -0,0 +1,96 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * Header file for the io_uring interface.
> + *
> + * Copyright (C) 2019 Jens Axboe
> + * Copyright (C) 2019 Christoph Hellwig
> + */
> +#ifndef LINUX_IO_URING_H
> +#define LINUX_IO_URING_H
> +
> +#include <linux/fs.h>
> +#include <linux/types.h>
> +
> +#define IORING_MAX_ENTRIES     4096
> +
> +/*
> + * IO submission data structure (Submission Queue Entry)
> + */
> +struct io_uring_sqe {
> +       __u8    opcode;         /* type of operation for this sqe */
> +       __u8    flags;          /* as of now unused */
> +       __u16   ioprio;         /* ioprio for the request */
> +       __s32   fd;             /* file descriptor to do IO on */
> +       __u64   off;            /* offset into file */
> +       __u64   addr;           /* pointer to buffer or iovecs */
> +       __u32   len;            /* buffer size or number of iovecs */
> +       union {
> +               __kernel_rwf_t  rw_flags;
> +               __u32           __resv;
> +       };
> +       __u64   user_data;      /* data to be passed back at completion time */
> +       __u64   __pad2[3];
> +};
> +
> +#define IORING_OP_NOP          0
> +#define IORING_OP_READV                1
> +#define IORING_OP_WRITEV       2
> +
> +/*
> + * IO completion data structure (Completion Queue Entry)
> + */
> +struct io_uring_cqe {
> +       __u64   user_data;      /* sqe->data submission passed back */
> +       __s32   res;            /* result code for this event */
> +       __u32   flags;
> +};
> +
> +/*
> + * Magic offsets for the application to mmap the data it needs
> + */
> +#define IORING_OFF_SQ_RING             0ULL
> +#define IORING_OFF_CQ_RING             0x8000000ULL
> +#define IORING_OFF_SQES                        0x10000000ULL
> +
> +/*
> + * Filled with the offset for mmap(2)
> + */
> +struct io_sqring_offsets {
> +       __u32 head;
> +       __u32 tail;
> +       __u32 ring_mask;
> +       __u32 ring_entries;
> +       __u32 flags;
> +       __u32 dropped;
> +       __u32 array;
> +       __u32 resv[3];
> +};
> +
> +struct io_cqring_offsets {
> +       __u32 head;
> +       __u32 tail;
> +       __u32 ring_mask;
> +       __u32 ring_entries;
> +       __u32 overflow;
> +       __u32 cqes;
> +       __u32 resv[4];
> +};
> +
> +/*
> + * io_uring_enter(2) flags
> + */
> +#define IORING_ENTER_GETEVENTS (1 << 0)
> +
> +/*
> + * Passed in for io_uring_setup(2). Copied back with updated info on success
> + */
> +struct io_uring_params {
> +       __u32 sq_entries;
> +       __u32 cq_entries;
> +       __u32 flags;
> +       __u16 resv[10];
> +       struct io_sqring_offsets sq_off;
> +       struct io_cqring_offsets cq_off;
> +};
> +
> +#endif

from a user perspective, it should always be easier if all exported
symbols and macros have a common prefix. Here it seems particularly
worrisome, because of the missing 'u' in the defines.

Best,
Bert


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 21:35 ` [PATCH 05/18] Add io_uring IO interface Jens Axboe
                     ` (4 preceding siblings ...)
  2019-01-29  7:12   ` Bert Wesarg
@ 2019-01-29 12:12   ` Florian Weimer
  2019-01-29 13:35     ` Jens Axboe
  5 siblings, 1 reply; 62+ messages in thread
From: Florian Weimer @ 2019-01-29 12:12 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-aio, linux-block, linux-man, linux-api, hch, jmoyer, avi

* Jens Axboe:

> +#define IORING_MAX_ENTRIES	4096

Where does this constant come from?  Should it really be exposed to
userspace?

> +struct io_uring_params {
> +	__u32 sq_entries;
> +	__u32 cq_entries;
> +	__u32 flags;
> +	__u16 resv[10];
> +	struct io_sqring_offsets sq_off;
> +	struct io_cqring_offsets cq_off;
> +};

> +struct io_sqring_offsets {
> +	__u32 head;
> +	__u32 tail;
> +	__u32 ring_mask;
> +	__u32 ring_entries;
> +	__u32 flags;
> +	__u32 dropped;
> +	__u32 array;
> +	__u32 resv[3];
> +};
> +
> +struct io_cqring_offsets {
> +	__u32 head;
> +	__u32 tail;
> +	__u32 ring_mask;
> +	__u32 ring_entries;
> +	__u32 overflow;
> +	__u32 cqes;
> +	__u32 resv[4];
> +};

Should the reserved fields include a __u64 member, to increase struct
alignment on architectures that might need it in the future?

> +#define IORING_ENTER_GETEVENTS	(1 << 0)

Should this be unsigned, to match the u32 flags argument?  (Otherwise
using 32 flags can be difficult).

Thanks,
Florian


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29 12:12   ` Florian Weimer
@ 2019-01-29 13:35     ` Jens Axboe
  0 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-29 13:35 UTC (permalink / raw)
  To: Florian Weimer
  Cc: linux-aio, linux-block, linux-man, linux-api, hch, jmoyer, avi

On 1/29/19 5:12 AM, Florian Weimer wrote:
> * Jens Axboe:
> 
>> +#define IORING_MAX_ENTRIES	4096
> 
> Where does this constant come from?  Should it really be exposed to
> userspace?

Seems pretty handy for the application to know what the limit is?
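
As a purely illustrative example (not from the patchset; the helper name
is made up), an application that wants "as many entries as allowed" can
simply clamp against the exported constant instead of probing
io_uring_setup(2) for the limit:

	#include <stdio.h>

	#define IORING_MAX_ENTRIES	4096	/* from the proposed <linux/io_uring.h> */

	static unsigned int ring_entries(unsigned int wanted)
	{
		/* clamp rather than risk the setup call rejecting the value */
		return wanted > IORING_MAX_ENTRIES ? IORING_MAX_ENTRIES : wanted;
	}

	int main(void)
	{
		printf("%u\n", ring_entries(16384));	/* prints 4096 */
		return 0;
	}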

>> +struct io_uring_params {
>> +	__u32 sq_entries;
>> +	__u32 cq_entries;
>> +	__u32 flags;
>> +	__u16 resv[10];
>> +	struct io_sqring_offsets sq_off;
>> +	struct io_cqring_offsets cq_off;
>> +};
> 
>> +struct io_sqring_offsets {
>> +	__u32 head;
>> +	__u32 tail;
>> +	__u32 ring_mask;
>> +	__u32 ring_entries;
>> +	__u32 flags;
>> +	__u32 dropped;
>> +	__u32 array;
>> +	__u32 resv[3];
>> +};
>> +
>> +struct io_cqring_offsets {
>> +	__u32 head;
>> +	__u32 tail;
>> +	__u32 ring_mask;
>> +	__u32 ring_entries;
>> +	__u32 overflow;
>> +	__u32 cqes;
>> +	__u32 resv[4];
>> +};
> 
> Should the reserved fields include a __u64 member, to increase struct
> alignment on architectures that might need it in the future?

Sure, I can do that.

>> +#define IORING_ENTER_GETEVENTS	(1 << 0)
> 
> Should this be unsigned, to match the u32 flags argument?  (Otherwise
> using 32 flags can be difficult).

Good point, I've changed them all to be unsigned.
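
Purely as a sketch of those two tweaks (not necessarily the layout that
was eventually merged), the CQ offsets struct and the enter flag could
become:

	struct io_cqring_offsets {
		__u32 head;
		__u32 tail;
		__u32 ring_mask;
		__u32 ring_entries;
		__u32 overflow;
		__u32 cqes;
		__u64 resv[2];	/* same size as the old __u32 resv[4], but the
				 * __u64 member bumps struct alignment to 8 bytes */
	};

	/* defined as unsigned so all 32 flag bits stay usable */
	#define IORING_ENTER_GETEVENTS	(1U << 0)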


-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29  3:46       ` Jens Axboe
@ 2019-01-29 15:56         ` Jann Horn
  2019-01-29 16:06           ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-01-29 15:56 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On Tue, Jan 29, 2019 at 4:46 AM Jens Axboe <axboe@kernel.dk> wrote:
> On 1/28/19 7:21 PM, Jann Horn wrote:
> > Please create a local copy of the request before parsing it to keep
> > the data from changing under you. Additionally, it might make sense to
> > annotate every pointer to shared memory with a comment, or something
> > like that, to ensure that anyone looking at the code can immediately
> > see for which pointers special caution is required on access.
>
> I took a look at the viability of NOT having to local copy the data, and
> I don't think it's too bad. Local copy has a noticeable impact on the
> performance, hence I'd really (REALLY) like to avoid it.
>
> Here's something on top of the current git branch. I think I even went a
> bit too far in some areas, but it should hopefully catch the cases where
> we might end up double evaluating the parts of the sqe that we depend
> on. For most of the sqe reading we don't really care too much. For
> instance, the sqe->user_data. If the app changes this field, then it
> just gets whatever passed back in cqe->user_data. That's not a kernel
> issue.
>
> For cases like addr/len etc validation, it should be sound. I'll double
> check this in the morning as well, and obviously would need to be folded
> in along the way.
>
> I'd appreciate your opinion on this part, if you see any major issues
> with it, or if I missed something.

The io_sqe_needs_user() checks still look racy. If that helper sees an
IORING_OP_READ_FIXED, but then __io_submit_sqe() sees an
IORING_OP_READV - especially if this happens in io_sq_wq_submit_work()
- I think you could potentially end up in places like
io_import_iovec() without having done the set_fs(USER_DS) and
use_mm(), causing the access to potentially occur with KERNEL_DS and a
lazy mm.
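
A minimal sketch of the pattern being pointed out here: read the shared
field once into a local and derive every later decision from that copy,
so the two checks cannot diverge. All names are taken from the patch,
but this snippet is illustrative only, not the actual fix:

	/* the sqe lives in memory the application can rewrite at any time */
	u8 opcode = READ_ONCE(sqe->opcode);
	bool needs_user = (opcode != IORING_OP_READ_FIXED &&
			   opcode != IORING_OP_WRITE_FIXED);

	if (needs_user) {
		set_fs(USER_DS);
		use_mm(ctx->sqo_mm);
	}

	/* ... and later dispatch on the same local 'opcode',
	 * never on sqe->opcode a second time ... */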


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-29 15:56         ` Jann Horn
@ 2019-01-29 16:06           ` Jens Axboe
  0 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-29 16:06 UTC (permalink / raw)
  To: Jann Horn
  Cc: linux-aio, linux-block, linux-man, Linux API, hch, jmoyer, Avi Kivity

On 1/29/19 8:56 AM, Jann Horn wrote:
> On Tue, Jan 29, 2019 at 4:46 AM Jens Axboe <axboe@kernel.dk> wrote:
>> On 1/28/19 7:21 PM, Jann Horn wrote:
>>> Please create a local copy of the request before parsing it to keep
>>> the data from changing under you. Additionally, it might make sense to
>>> annotate every pointer to shared memory with a comment, or something
>>> like that, to ensure that anyone looking at the code can immediately
>>> see for which pointers special caution is required on access.
>>
>> I took a look at the viability of NOT having to local copy the data, and
>> I don't think it's too bad. Local copy has a noticeable impact on the
>> performance, hence I'd really (REALLY) like to avoid it.
>>
>> Here's something on top of the current git branch. I think I even went a
>> bit too far in some areas, but it should hopefully catch the cases where
>> we might end up double evaluating the parts of the sqe that we depend
>> on. For most of the sqe reading we don't really care too much. For
>> instance, the sqe->user_data. If the app changes this field, then it
>> just gets whatever passed back in cqe->user_data. That's not a kernel
>> issue.
>>
>> For cases like addr/len etc validation, it should be sound. I'll double
>> check this in the morning as well, and obviously would need to be folded
>> in along the way.
>>
>> I'd appreciate your opinion on this part, if you see any major issues
>> with it, or if I missed something.
> 
> The io_sqe_needs_user() checks still look racy. If that helper sees an
> IORING_OP_READ_FIXED, but then __io_submit_sqe() sees an
> IORING_OP_READV - especially if this happens in io_sq_wq_submit_work()
> - I think you could potentially end up in places like
> io_import_iovec() without having done the set_fs(USER_DS) and
> use_mm(), causing the access to potentially occur with KERNEL_DS and a
> lazy mm.

Indeed, for that case I think we should just copy the sqe. It's in the
async offload context anyway, so a copy won't really change anything
in terms of performance. And since the gap is so large between the two
problematic spots, it'd be trickier to fix.
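
Roughly what "just copy the sqe" could look like before the request is
punted to the async context (a sketch, not the actual change; the
allocation and the point where the copy is freed are glossed over):

	struct io_uring_sqe *copy;

	copy = kmalloc(sizeof(*copy), GFP_KERNEL);
	if (!copy)
		return -ENOMEM;

	/* snapshot the user-visible sqe once; the worker only ever sees the copy */
	memcpy(copy, s->sqe, sizeof(*copy));
	s->sqe = copy;		/* must be freed again once the work item has run */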

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 07/18] io_uring: support for IO polling
  2019-01-28 21:35 ` [PATCH 07/18] io_uring: support for IO polling Jens Axboe
@ 2019-01-29 17:24   ` Christoph Hellwig
  2019-01-29 18:31     ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Christoph Hellwig @ 2019-01-29 17:24 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-aio, linux-block, linux-man, linux-api, hch, jmoyer, avi

>  
> @@ -118,12 +120,16 @@ struct io_kiocb {
>  	struct list_head	list;
>  	unsigned int		flags;
>  #define REQ_F_FORCE_NONBLOCK	1	/* inline submission attempt */
> +#define REQ_F_IOPOLL_COMPLETED	2	/* polled IO has completed */
> +#define REQ_F_IOPOLL_EAGAIN	4	/* submission got EAGAIN */
>  	u64			user_data;
> +	u64			res;

Should this be ret or error instead?  res is kinda off.  A little
comment describing it won't hurt either.  Last but not least, with
the actual errno value stored here we probably don't need the
REQ_F_IOPOLL_EAGAIN flag, do we?

> +	/*
> +	 * Only spin for completions if we don't have multiple devices hanging
> +	 * off our complete list, and we're under the requested amount.
> +	 */
> +	spin = !ctx->poll_multi_file && (*nr_events < min);

no need for the braces here.

> +static int io_iopoll_getevents(struct io_ring_ctx *ctx, unsigned int *nr_events,
> +				long min)
> +{
> +	int ret;
> +
> +	do {
> +		if (list_empty(&ctx->poll_list))
> +			return 0;
> +
> +		ret = io_do_iopoll(ctx, nr_events, min);
> +		if (ret < 0)
> +			break;
> +	} while (min && *nr_events < min);
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	return *nr_events < min;

The code looks a little clumsy to me.  Why not:

	while (!list_empty(&ctx->poll_list)) {
		int ret = io_do_iopoll(ctx, nr_events, min);
		if (ret)
			return ret;

		if (!min || *nr_events >= min)
			return 0;
	}

	return 1;


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 10/18] io_uring: batch io_kiocb allocation
  2019-01-28 21:35 ` [PATCH 10/18] io_uring: batch io_kiocb allocation Jens Axboe
@ 2019-01-29 17:26   ` Christoph Hellwig
  2019-01-29 18:14     ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Christoph Hellwig @ 2019-01-29 17:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-aio, linux-block, linux-man, linux-api, hch, jmoyer, avi

> -static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
> +static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
> +				   struct io_submit_state *state)
>  {
> +	gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;

Not actually in this patch, but why do we do GFP_ATOMIC allocations
here?  We aren't in irq context or under a spinlock.

> +	if (!state)
> +		req = kmem_cache_alloc(req_cachep, gfp);

Add a
		if (!req)
			goto out;

plus the missing braces here..

> +	else if (!state->free_reqs) {
> +		size_t sz;
> +		int ret;
> +
> +		sz = min_t(size_t, state->ios_left, ARRAY_SIZE(state->reqs));
> +		ret = kmem_cache_alloc_bulk(req_cachep, gfp, sz,
> +						state->reqs);
> +		if (ret <= 0)
> +			goto out;
> +		state->free_reqs = ret - 1;
> +		state->cur_req = 1;
> +		req = state->reqs[0];
> +	} else {
> +		req = state->reqs[state->cur_req];
> +		state->free_reqs--;
> +		state->cur_req++;
> +	}
> +
>  	if (req) {
>  		req->ctx = ctx;
>  		req->flags = 0;
>  		return req;
>  	}

... and we don't need this conditional, which would otherwise also
be in the fast path.


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 15/18] io_uring: add io_kiocb ref count
  2019-01-28 21:35 ` [PATCH 15/18] io_uring: add io_kiocb ref count Jens Axboe
@ 2019-01-29 17:26   ` Christoph Hellwig
  0 siblings, 0 replies; 62+ messages in thread
From: Christoph Hellwig @ 2019-01-29 17:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-aio, linux-block, linux-man, linux-api, hch, jmoyer, avi

On Mon, Jan 28, 2019 at 02:35:35PM -0700, Jens Axboe wrote:
> We'll use this for the POLL implementation. Regular requests will
> NOT be using references, so initialize it to 0. Any real use of
> the io_kiocb ref will initialize it to at least 2.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 10/18] io_uring: batch io_kiocb allocation
  2019-01-29 17:26   ` Christoph Hellwig
@ 2019-01-29 18:14     ` Jens Axboe
  0 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-29 18:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-aio, linux-block, linux-man, linux-api, jmoyer, avi

On 1/29/19 10:26 AM, Christoph Hellwig wrote:
>> -static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx)
>> +static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
>> +				   struct io_submit_state *state)
>>  {
>> +	gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
> 
> Not actually in this patch, but why do we do GFP_ATOMIC allocations
> here?  We aren't in irq context or under a spinlock.

Just to avoid dipping into more expensive allocs. Probably not a big
deal, I'll remove the atomic.

> 
>> +	if (!state)
>> +		req = kmem_cache_alloc(req_cachep, gfp);
> 
> Add a
> 		if (!req)
> 			goto out;
> 
> plus the missing braces here..
> 
>> +	else if (!state->free_reqs) {
>> +		size_t sz;
>> +		int ret;
>> +
>> +		sz = min_t(size_t, state->ios_left, ARRAY_SIZE(state->reqs));
>> +		ret = kmem_cache_alloc_bulk(req_cachep, gfp, sz,
>> +						state->reqs);
>> +		if (ret <= 0)
>> +			goto out;
>> +		state->free_reqs = ret - 1;
>> +		state->cur_req = 1;
>> +		req = state->reqs[0];
>> +	} else {
>> +		req = state->reqs[state->cur_req];
>> +		state->free_reqs--;
>> +		state->cur_req++;
>> +	}
>> +
>>  	if (req) {
>>  		req->ctx = ctx;
>>  		req->flags = 0;
>>  		return req;
>>  	}
> 
> ... and we don't need this conditional, which would otherwise also
> be in the fast path.

Good idea, done.
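
Putting those points together (the goto on allocation failure, dropping
the trailing conditional, and GFP_KERNEL instead of GFP_ATOMIC), the
allocation path might end up shaped roughly like this. The body of the
'out' label isn't visible in the quoted diff, so it is only hinted at;
this is a sketch, not the actual follow-up patch:

	static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
					   struct io_submit_state *state)
	{
		gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
		struct io_kiocb *req;

		if (!state) {
			req = kmem_cache_alloc(req_cachep, gfp);
			if (unlikely(!req))
				goto out;
		} else if (!state->free_reqs) {
			size_t sz;
			int ret;

			sz = min_t(size_t, state->ios_left, ARRAY_SIZE(state->reqs));
			ret = kmem_cache_alloc_bulk(req_cachep, gfp, sz, state->reqs);
			if (ret <= 0)
				goto out;
			state->free_reqs = ret - 1;
			state->cur_req = 1;
			req = state->reqs[0];
		} else {
			req = state->reqs[state->cur_req];
			state->free_reqs--;
			state->cur_req++;
		}

		/* every branch above either jumped to 'out' or left a valid
		 * request in 'req', so no trailing NULL check is needed */
		req->ctx = ctx;
		req->flags = 0;
		return req;
	out:
		/* whatever failure handling the original 'out' label does */
		return NULL;
	}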

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 07/18] io_uring: support for IO polling
  2019-01-29 17:24   ` Christoph Hellwig
@ 2019-01-29 18:31     ` Jens Axboe
  2019-01-29 19:10       ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-29 18:31 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-aio, linux-block, linux-man, linux-api, jmoyer, avi

On 1/29/19 10:24 AM, Christoph Hellwig wrote:
>>  
>> @@ -118,12 +120,16 @@ struct io_kiocb {
>>  	struct list_head	list;
>>  	unsigned int		flags;
>>  #define REQ_F_FORCE_NONBLOCK	1	/* inline submission attempt */
>> +#define REQ_F_IOPOLL_COMPLETED	2	/* polled IO has completed */
>> +#define REQ_F_IOPOLL_EAGAIN	4	/* submission got EAGAIN */
>>  	u64			user_data;
>> +	u64			res;
> 
> Should this be ret or error instead?  res is kinda off.  A little
> comment describing it won't hurt either.  Last but not least with
> the actual errno value stored here we probably don't need the
> REQ_F_IOPOLL_EAGAIN flag, do we?

Yes good point, that flag pre-dates us having the error in there. I'll
rename the field, too.
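
Illustratively, with the errno kept in the request itself that could
shrink to something like the following (the field name follows
Christoph's suggestion, and the type is a guess, not what was actually
merged):

	unsigned int		flags;
	#define REQ_F_FORCE_NONBLOCK	1	/* inline submission attempt */
	#define REQ_F_IOPOLL_COMPLETED	2	/* polled IO has completed */
	u64			user_data;
	s64			error;		/* completion result; a negative value
						 * (e.g. -EAGAIN) replaces the old flag */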

>> +	/*
>> +	 * Only spin for completions if we don't have multiple devices hanging
>> +	 * off our complete list, and we're under the requested amount.
>> +	 */
>> +	spin = !ctx->poll_multi_file && (*nr_events < min);
> 
> no need for the braces here.

Killed

>> +static int io_iopoll_getevents(struct io_ring_ctx *ctx, unsigned int *nr_events,
>> +				long min)
>> +{
>> +	int ret;
>> +
>> +	do {
>> +		if (list_empty(&ctx->poll_list))
>> +			return 0;
>> +
>> +		ret = io_do_iopoll(ctx, nr_events, min);
>> +		if (ret < 0)
>> +			break;
>> +	} while (min && *nr_events < min);
>> +
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	return *nr_events < min;
> 
> The code looks a little clumsy to me.  Why not:
> 
> 	while (!list_empty(&ctx->poll_list)) {
> 		int ret = io_do_iopoll(ctx, nr_events, min);
> 		if (ret)
> 			return ret;
> 
> 		if (!min || *nr_events >= min)
> 			return 0;
> 	}
> 
> 	return 1;

I think you messed up the 0/1 here, how about this:

	while (!list_empty(&ctx->poll_list)) {
		int ret;

		ret = io_do_iopoll(ctx, nr_events, min);
		if (ret < 0)
			return ret;
		if (!min || *nr_events >= min)
			return 1;
	}

	return 0;

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 07/18] io_uring: support for IO polling
  2019-01-29 18:31     ` Jens Axboe
@ 2019-01-29 19:10       ` Jens Axboe
  2019-01-29 20:35         ` Jeff Moyer
  0 siblings, 1 reply; 62+ messages in thread
From: Jens Axboe @ 2019-01-29 19:10 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-aio, linux-block, linux-man, linux-api, jmoyer, avi

On 1/29/19 11:31 AM, Jens Axboe wrote:
>> The code looks a little clumsy to me.  Why not:
>>
>> 	while (!list_empty(&ctx->poll_list)) {
>> 		int ret = io_do_iopoll(ctx, nr_events, min);
>> 		if (ret)
>> 			return ret;
>>
>> 		if (!min || *nr_events >= min)
>> 			return 0;
>> 	}
>>
>> 	return 1;
> 
> I think you messed up the 0/1 here, how about this:
> 
> 	while (!list_empty(&ctx->poll_list)) {
> 		int ret;
> 
> 		ret = io_do_iopoll(ctx, nr_events, min);
> 		if (ret < 0)
> 			return ret;
> 		if (!min || *nr_events >= min)
> 			return 1;
> 	}
> 
> 	return 0;

Or I did... I think yours is correct.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 07/18] io_uring: support for IO polling
  2019-01-29 19:10       ` Jens Axboe
@ 2019-01-29 20:35         ` Jeff Moyer
  2019-01-29 20:37           ` Jens Axboe
  0 siblings, 1 reply; 62+ messages in thread
From: Jeff Moyer @ 2019-01-29 20:35 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, linux-aio, linux-block, linux-man, linux-api, avi

Jens Axboe <axboe@kernel.dk> writes:

> On 1/29/19 11:31 AM, Jens Axboe wrote:
>>> The code looks a little clumsy to me.  Why not:
>>>
>>> 	while (!list_empty(&ctx->poll_list)) {
>>> 		int ret = io_do_iopoll(ctx, nr_events, min);
>>> 		if (ret)
>>> 			return ret;
>>>
>>> 		if (!min || *nr_events >= min)
>>> 			return 0;
>>> 	}
>>>
>>> 	return 1;
>> 
>> I think you messed up the 0/1 here, how about this:
>> 
>> 	while (!list_empty(&ctx->poll_list)) {
>> 		int ret;
>> 
>> 		ret = io_do_iopoll(ctx, nr_events, min);
>> 		if (ret < 0)
>> 			return ret;
>> 		if (!min || *nr_events >= min)
>> 			return 1;
>> 	}
>> 
>> 	return 0;
>
> Or I did... I think yours is correct.

maybe document the return code?  ;-)

-Jeff


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 07/18] io_uring: support for IO polling
  2019-01-29 20:35         ` Jeff Moyer
@ 2019-01-29 20:37           ` Jens Axboe
  0 siblings, 0 replies; 62+ messages in thread
From: Jens Axboe @ 2019-01-29 20:37 UTC (permalink / raw)
  To: Jeff Moyer
  Cc: Christoph Hellwig, linux-aio, linux-block, linux-man, linux-api, avi

On 1/29/19 1:35 PM, Jeff Moyer wrote:
> Jens Axboe <axboe@kernel.dk> writes:
> 
>> On 1/29/19 11:31 AM, Jens Axboe wrote:
>>>> The code looks a little clumsy to me.  Why not:
>>>>
>>>> 	while (!list_empty(&ctx->poll_list)) {
>>>> 		int ret = io_do_iopoll(ctx, nr_events, min);
>>>> 		if (ret)
>>>> 			return ret;
>>>>
>>>> 		if (!min || *nr_events >= min)
>>>> 			return 0;
>>>> 	}
>>>>
>>>> 	return 1;
>>>
>>> I think you messed up the 0/1 here, how about this:
>>>
>>> 	while (!list_empty(&ctx->poll_list)) {
>>> 		int ret;
>>>
>>> 		ret = io_do_iopoll(ctx, nr_events, min);
>>> 		if (ret < 0)
>>> 			return ret;
>>> 		if (!min || *nr_events >= min)
>>> 			return 1;
>>> 	}
>>>
>>> 	return 0;
>>
>> Or I did... I think yours is correct.
> 
> maybe document the return code?  ;-)

It is sort-of documented - <= 0 means "don't call me again", either
because of an error (< 0) or because we have what we need. 1 means
"I might have more".

But yes, maybe a comment...
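
For reference, the comment being asked for might read something along
these lines, using the convention described above (a sketch only;
whichever 0/1 convention was finally kept, the wording would follow it):

	/*
	 * Returns <= 0 if the caller should stop polling: either an error
	 * occurred (< 0) or we already reaped what was asked for (0).
	 * Returns 1 if there may be more completions worth polling for.
	 */
	static int io_iopoll_getevents(struct io_ring_ctx *ctx,
				       unsigned int *nr_events, long min)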

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-01-28 23:59       ` Jann Horn
  2019-01-29  0:03         ` Jens Axboe
@ 2019-02-01 16:57         ` Matt Mullins
  2019-02-01 17:04           ` Jann Horn
  1 sibling, 1 reply; 62+ messages in thread
From: Matt Mullins @ 2019-02-01 16:57 UTC (permalink / raw)
  To: jannh, axboe
  Cc: linux-aio, linux-block, jmoyer, linux-api, hch, viro, linux-man, avi

On Tue, 2019-01-29 at 00:59 +0100, Jann Horn wrote:
> On Tue, Jan 29, 2019 at 12:47 AM Jens Axboe <axboe@kernel.dk> wrote:
> > On 1/28/19 3:32 PM, Jann Horn wrote:
> > > On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
> > > > The submission queue (SQ) and completion queue (CQ) rings are shared
> > > > between the application and the kernel. This eliminates the need to
> > > > copy data back and forth to submit and complete IO.
> > > > 
> > > > IO submissions use the io_uring_sqe data structure, and completions
> > > > are generated in the form of io_uring_cqe data structures. The SQ
> > > > ring is an index into the io_uring_sqe array, which makes it possible
> > > > to submit a batch of IOs without them being contiguous in the ring.
> > > > The CQ ring is always contiguous, as completion events are inherently
> > > > unordered, and hence any io_uring_cqe entry can point back to an
> > > > arbitrary submission.
> > > > 
> > > > Two new system calls are added for this:
> > > > 
> > > > io_uring_setup(entries, params)
> > > >         Sets up a context for doing async IO. On success, returns a file
> > > >         descriptor that the application can mmap to gain access to the
> > > >         SQ ring, CQ ring, and io_uring_sqes.
> > > > 
> > > > io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
> > > >         Initiates IO against the rings mapped to this fd, or waits for
> > > >         them to complete, or both. The behavior is controlled by the
> > > >         parameters passed in. If 'to_submit' is non-zero, then we'll
> > > >         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
> > > >         kernel will wait for 'min_complete' events, if they aren't
> > > >         already available. It's valid to set IORING_ENTER_GETEVENTS
> > > >         and 'min_complete' == 0 at the same time, this allows the
> > > >         kernel to return already completed events without waiting
> > > >         for them. This is useful only for polling, as for IRQ
> > > >         driven IO, the application can just check the CQ ring
> > > >         without entering the kernel.
> > > > 
> > > > With this setup, it's possible to do async IO with a single system
> > > > call. Future developments will enable polled IO with this interface,
> > > > and polled submission as well. The latter will enable an application
> > > > to do IO without doing ANY system calls at all.
> > > > 
> > > > For IRQ driven IO, an application only needs to enter the kernel for
> > > > completions if it wants to wait for them to occur.
> > > > 
> > > > Each io_uring is backed by a workqueue, to support buffered async IO
> > > > as well. We will only punt to an async context if the command would
> > > > need to wait for IO on the device side. Any data that can be accessed
> > > > directly in the page cache is done inline. This avoids the slowness
> > > > issue of usual threadpools, since cached data is accessed as quickly
> > > > as a sync interface.
> > > > 
> > > > Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
> > > 
> > > [...]
> > > > +static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> > > > +                     bool force_nonblock)
> > > > +{
> > > > +       struct kiocb *kiocb = &req->rw;
> > > > +       int ret;
> > > > +
> > > > +       kiocb->ki_filp = fget(sqe->fd);
> > > > +       if (unlikely(!kiocb->ki_filp))
> > > > +               return -EBADF;
> > > > +       kiocb->ki_pos = sqe->off;
> > > > +       kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
> > > > +       kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
> > > > +       if (sqe->ioprio) {
> > > > +               ret = ioprio_check_cap(sqe->ioprio);
> > > > +               if (ret)
> > > > +                       goto out_fput;
> > > > +
> > > > +               kiocb->ki_ioprio = sqe->ioprio;
> > > > +       } else
> > > > +               kiocb->ki_ioprio = get_current_ioprio();
> > > > +
> > > > +       ret = kiocb_set_rw_flags(kiocb, sqe->rw_flags);
> > > > +       if (unlikely(ret))
> > > > +               goto out_fput;
> > > > +       if (force_nonblock) {
> > > > +               kiocb->ki_flags |= IOCB_NOWAIT;
> > > > +               req->flags |= REQ_F_FORCE_NONBLOCK;
> > > > +       }
> > > > +       if (kiocb->ki_flags & IOCB_HIPRI) {
> > > > +               ret = -EINVAL;
> > > > +               goto out_fput;
> > > > +       }
> > > > +
> > > > +       kiocb->ki_complete = io_complete_rw;
> > > > +       return 0;
> > > > +out_fput:
> > > > +       fput(kiocb->ki_filp);
> > > > +       return ret;
> > > > +}
> > > 
> > > [...]
> > > > +static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> > > > +                      bool force_nonblock)
> > > > +{
> > > > +       struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
> > > > +       struct kiocb *kiocb = &req->rw;
> > > > +       struct iov_iter iter;
> > > > +       struct file *file;
> > > > +       ssize_t ret;
> > > > +
> > > > +       ret = io_prep_rw(req, sqe, force_nonblock);
> > > > +       if (ret)
> > > > +               return ret;
> > > > +       file = kiocb->ki_filp;
> > > > +
> > > > +       ret = -EBADF;
> > > > +       if (unlikely(!(file->f_mode & FMODE_READ)))
> > > > +               goto out_fput;
> > > > +       ret = -EINVAL;
> > > > +       if (unlikely(!file->f_op->read_iter))
> > > > +               goto out_fput;
> > > > +
> > > > +       ret = io_import_iovec(req->ctx, READ, sqe, &iovec, &iter);
> > > > +       if (ret)
> > > > +               goto out_fput;
> > > > +
> > > > +       ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_iter_count(&iter));
> > > > +       if (!ret) {
> > > > +               ssize_t ret2;
> > > > +
> > > > +               /* Catch -EAGAIN return for forced non-blocking submission */
> > > > +               ret2 = call_read_iter(file, kiocb, &iter);
> > > > +               if (!force_nonblock || ret2 != -EAGAIN)
> > > > +                       io_rw_done(kiocb, ret2);
> > > > +               else
> > > > +                       ret = -EAGAIN;
> > > > +       }
> > > > +       kfree(iovec);
> > > > +out_fput:
> > > > +       if (unlikely(ret))
> > > > +               fput(file);
> > > > +       return ret;
> > > > +}
> > > 
> > > [...]
> > > > +static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > > > +                          struct sqe_submit *s, bool force_nonblock)
> > > > +{
> > > > +       const struct io_uring_sqe *sqe = s->sqe;
> > > > +       ssize_t ret;
> > > > +
> > > > +       if (unlikely(s->index >= ctx->sq_entries))
> > > > +               return -EINVAL;
> > > > +       req->user_data = sqe->user_data;
> > > > +
> > > > +       ret = -EINVAL;
> > > > +       switch (sqe->opcode) {
> > > > +       case IORING_OP_NOP:
> > > > +               ret = io_nop(req, sqe);
> > > > +               break;
> > > > +       case IORING_OP_READV:
> > > > +               ret = io_read(req, sqe, force_nonblock);
> > > > +               break;
> > > > +       case IORING_OP_WRITEV:
> > > > +               ret = io_write(req, sqe, force_nonblock);
> > > > +               break;
> > > > +       default:
> > > > +               ret = -EINVAL;
> > > > +               break;
> > > > +       }
> > > > +
> > > > +       return ret;
> > > > +}
> > > > +
> > > > +static void io_sq_wq_submit_work(struct work_struct *work)
> > > > +{
> > > > +       struct io_kiocb *req = container_of(work, struct io_kiocb, work);
> > > > +       struct sqe_submit *s = &req->submit;
> > > > +       u64 user_data = s->sqe->user_data;
> > > > +       struct io_ring_ctx *ctx = req->ctx;
> > > > +       mm_segment_t old_fs = get_fs();
> > > > +       struct files_struct *old_files;
> > > > +       int ret;
> > > > +
> > > > +        /* Ensure we clear previously set forced non-block flag */
> > > > +       req->flags &= ~REQ_F_FORCE_NONBLOCK;
> > > > +
> > > > +       old_files = current->files;
> > > > +       current->files = ctx->sqo_files;
> > > 
> > > I think you're not supposed to twiddle with current->files without
> > > holding task_lock(current).
> > 
> > 'current' is the work queue item in this case, do we need to protect
> > against anything else? I can add the locking around the assignments
> > (both places).
> 
> Stuff like proc_fd_link() uses get_files_struct(), which grabs a
> reference to your current files_struct protected only by task_lock();
> and it doesn't use anything like READ_ONCE(), so even if the object
> lifetime is not a problem, get_files_struct() could potentially crash
> due to a double-read (reading task->files twice and assuming that the
> result will be the same). As far as I can tell, this procfs code also
> works on kernel threads.
> 
> > > > +       if (!mmget_not_zero(ctx->sqo_mm)) {
> > > > +               ret = -EFAULT;
> > > > +               goto err;
> > > > +       }
> > > > +
> > > > +       use_mm(ctx->sqo_mm);
> > > > +       set_fs(USER_DS);
> > > > +
> > > > +       ret = __io_submit_sqe(ctx, req, s, false);
> > > > +
> > > > +       set_fs(old_fs);
> > > > +       unuse_mm(ctx->sqo_mm);
> > > > +       mmput(ctx->sqo_mm);
> > > > +err:
> > > > +       if (ret) {
> > > > +               io_cqring_add_event(ctx, user_data, ret, 0);
> > > > +               io_free_req(req);
> > > > +       }
> > > > +       current->files = old_files;
> > > > +}
> > > 
> > > [...]
> > > > +static int io_sq_offload_start(struct io_ring_ctx *ctx)
> > > > +{
> > > > +       int ret;
> > > > +
> > > > +       ctx->sqo_mm = current->mm;
> > > 
> > > What keeps this thing alive?
> > 
> > I think we're deadling with the same thing as the files below, I'll
> > defer to that.
> > 
> > > > +       /*
> > > > +        * This is safe since 'current' has the fd installed, and if that gets
> > > > +        * closed on exit, then fops->release() is invoked which waits for the
> > > > +        * async contexts to flush and exit before exiting.
> > > > +        */
> > > > +       ret = -EBADF;
> > > > +       ctx->sqo_files = current->files;
> > > > +       if (!ctx->sqo_files)
> > > > +               goto err;
> > > 
> > > That's gnarly. Adding Al Viro to the thread.
> > > 
> > > I think you misunderstand the semantics of f_op->release. The ->flush
> > > handler is invoked whenever a file descriptor is closed through
> > > filp_close() (via deletion of the files_struct, sys_close(),
> > > sys_dup2(), ...), so if you had used that one, _maybe_ this would
> > > work. But the ->release handler only runs when the _last_ reference to
> > > a struct file has been dropped - so you can, for example, fork() a
> > > child, then exit() in the parent, and the ->release handler isn't
> > > invoked. So I don't see how this can work.
> > 
> > The anonfd is CLOEXEC. The idea is exactly that it only runs when the
> > last reference to the file has been dropped. Not sure why you think I
> > need ->flush() here?
> 
> Can't I just use fcntl(fd, F_SETFD, fd, 0) to clear the CLOEXEC flag?
> Or send the fd via SCM_RIGHTS?
> 
> > > But even if you had abused ->flush for this instead: close_files()
> > > currently has a comment in it that claims that "this is the last
> > > reference to the files structure"; this change would make that claim
> > > untrue.
> > 
> > Let me see if I can explain my intent better than that comment... We
> > know the parent who set up the io_uring instance will be around for as
> > long as io_uring instance persists.
> 
> That's the part that I think is wrong: As far as I can tell, the
> parent can go away and you won't notice.
> 
> Also, note that "the parent" is different things for ->files and ->mm.
> You can have a multithreaded process whose threads don't have the same
> ->files, or multiple process that share ->files without sharing ->mm,
> ...

This had actually been get_files_struct() in early versions, and I had
reported to Jens that it allows something like

int main() {
  struct io_uring_params uring_params = {
  	.flags = IORING_SETUP_SQPOLL,
  };
  int uring_fd = syscall(425 /* io_uring_setup */, 16, &uring_params);
}

to leak both the files_struct and the kthread, as the files_struct and
the uring context form a circular reference.  I haven't really come up
with a good way to reconcile the requirements here; perhaps we need an
exit_uring() akin to exit_aio()?

> > When we are tearing down the
> > io_uring, then we wait for any async contexts (like the one above) to
> > exit. Once they are exited, it's safe to proceed with the exit and
> > teardown ->files[].
> 
> But you only do that teardown on ->release, right? And ->release
> doesn't have much to do with the process lifetime.
> 
> > That's the idea... Not trying to be clever, some of this dates back to
> > the aio weirdness where it was impossible to have cross references like
> > this, since it would lead to teardown deadlocks with how exit_aio() is
> > used. I can probably grab a struct files reference above, but currently
> > I don't see why it's needed.

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH 05/18] Add io_uring IO interface
  2019-02-01 16:57         ` Matt Mullins
@ 2019-02-01 17:04           ` Jann Horn
  2019-02-01 17:23             ` Jann Horn
  0 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-02-01 17:04 UTC (permalink / raw)
  To: Matt Mullins
  Cc: axboe, linux-aio, linux-block, jmoyer, linux-api, hch, viro,
	linux-man, avi

On Fri, Feb 1, 2019 at 5:57 PM Matt Mullins <mmullins@fb.com> wrote:
> On Tue, 2019-01-29 at 00:59 +0100, Jann Horn wrote:
> > On Tue, Jan 29, 2019 at 12:47 AM Jens Axboe <axboe@kernel.dk> wrote:
> > > On 1/28/19 3:32 PM, Jann Horn wrote:
> > > > On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
> > > > > The submission queue (SQ) and completion queue (CQ) rings are shared
> > > > > between the application and the kernel. This eliminates the need to
> > > > > copy data back and forth to submit and complete IO.
> > > > >
> > > > > IO submissions use the io_uring_sqe data structure, and completions
> > > > > are generated in the form of io_uring_cqe data structures. The SQ
> > > > > ring is an index into the io_uring_sqe array, which makes it possible
> > > > > to submit a batch of IOs without them being contiguous in the ring.
> > > > > The CQ ring is always contiguous, as completion events are inherently
> > > > > unordered, and hence any io_uring_cqe entry can point back to an
> > > > > arbitrary submission.
> > > > >
> > > > > Two new system calls are added for this:
> > > > >
> > > > > io_uring_setup(entries, params)
> > > > >         Sets up a context for doing async IO. On success, returns a file
> > > > >         descriptor that the application can mmap to gain access to the
> > > > >         SQ ring, CQ ring, and io_uring_sqes.
> > > > >
> > > > > io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
> > > > >         Initiates IO against the rings mapped to this fd, or waits for
> > > > >         them to complete, or both. The behavior is controlled by the
> > > > >         parameters passed in. If 'to_submit' is non-zero, then we'll
> > > > >         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
> > > > >         kernel will wait for 'min_complete' events, if they aren't
> > > > >         already available. It's valid to set IORING_ENTER_GETEVENTS
> > > > >         and 'min_complete' == 0 at the same time, this allows the
> > > > >         kernel to return already completed events without waiting
> > > > >         for them. This is useful only for polling, as for IRQ
> > > > >         driven IO, the application can just check the CQ ring
> > > > >         without entering the kernel.
> > > > >
> > > > > With this setup, it's possible to do async IO with a single system
> > > > > call. Future developments will enable polled IO with this interface,
> > > > > and polled submission as well. The latter will enable an application
> > > > > to do IO without doing ANY system calls at all.
> > > > >
> > > > > For IRQ driven IO, an application only needs to enter the kernel for
> > > > > completions if it wants to wait for them to occur.
> > > > >
> > > > > Each io_uring is backed by a workqueue, to support buffered async IO
> > > > > as well. We will only punt to an async context if the command would
> > > > > need to wait for IO on the device side. Any data that can be accessed
> > > > > directly in the page cache is done inline. This avoids the slowness
> > > > > issue of usual threadpools, since cached data is accessed as quickly
> > > > > as a sync interface.
> > > > >
> > > > > Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
> > > >
> > > > [...]
> > > > > +static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> > > > > +                     bool force_nonblock)
> > > > > +{
> > > > > +       struct kiocb *kiocb = &req->rw;
> > > > > +       int ret;
> > > > > +
> > > > > +       kiocb->ki_filp = fget(sqe->fd);
> > > > > +       if (unlikely(!kiocb->ki_filp))
> > > > > +               return -EBADF;
> > > > > +       kiocb->ki_pos = sqe->off;
> > > > > +       kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
> > > > > +       kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
> > > > > +       if (sqe->ioprio) {
> > > > > +               ret = ioprio_check_cap(sqe->ioprio);
> > > > > +               if (ret)
> > > > > +                       goto out_fput;
> > > > > +
> > > > > +               kiocb->ki_ioprio = sqe->ioprio;
> > > > > +       } else
> > > > > +               kiocb->ki_ioprio = get_current_ioprio();
> > > > > +
> > > > > +       ret = kiocb_set_rw_flags(kiocb, sqe->rw_flags);
> > > > > +       if (unlikely(ret))
> > > > > +               goto out_fput;
> > > > > +       if (force_nonblock) {
> > > > > +               kiocb->ki_flags |= IOCB_NOWAIT;
> > > > > +               req->flags |= REQ_F_FORCE_NONBLOCK;
> > > > > +       }
> > > > > +       if (kiocb->ki_flags & IOCB_HIPRI) {
> > > > > +               ret = -EINVAL;
> > > > > +               goto out_fput;
> > > > > +       }
> > > > > +
> > > > > +       kiocb->ki_complete = io_complete_rw;
> > > > > +       return 0;
> > > > > +out_fput:
> > > > > +       fput(kiocb->ki_filp);
> > > > > +       return ret;
> > > > > +}
> > > >
> > > > [...]
> > > > > +static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> > > > > +                      bool force_nonblock)
> > > > > +{
> > > > > +       struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
> > > > > +       struct kiocb *kiocb = &req->rw;
> > > > > +       struct iov_iter iter;
> > > > > +       struct file *file;
> > > > > +       ssize_t ret;
> > > > > +
> > > > > +       ret = io_prep_rw(req, sqe, force_nonblock);
> > > > > +       if (ret)
> > > > > +               return ret;
> > > > > +       file = kiocb->ki_filp;
> > > > > +
> > > > > +       ret = -EBADF;
> > > > > +       if (unlikely(!(file->f_mode & FMODE_READ)))
> > > > > +               goto out_fput;
> > > > > +       ret = -EINVAL;
> > > > > +       if (unlikely(!file->f_op->read_iter))
> > > > > +               goto out_fput;
> > > > > +
> > > > > +       ret = io_import_iovec(req->ctx, READ, sqe, &iovec, &iter);
> > > > > +       if (ret)
> > > > > +               goto out_fput;
> > > > > +
> > > > > +       ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_iter_count(&iter));
> > > > > +       if (!ret) {
> > > > > +               ssize_t ret2;
> > > > > +
> > > > > +               /* Catch -EAGAIN return for forced non-blocking submission */
> > > > > +               ret2 = call_read_iter(file, kiocb, &iter);
> > > > > +               if (!force_nonblock || ret2 != -EAGAIN)
> > > > > +                       io_rw_done(kiocb, ret2);
> > > > > +               else
> > > > > +                       ret = -EAGAIN;
> > > > > +       }
> > > > > +       kfree(iovec);
> > > > > +out_fput:
> > > > > +       if (unlikely(ret))
> > > > > +               fput(file);
> > > > > +       return ret;
> > > > > +}
> > > >
> > > > [...]
> > > > > +static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > > > > +                          struct sqe_submit *s, bool force_nonblock)
> > > > > +{
> > > > > +       const struct io_uring_sqe *sqe = s->sqe;
> > > > > +       ssize_t ret;
> > > > > +
> > > > > +       if (unlikely(s->index >= ctx->sq_entries))
> > > > > +               return -EINVAL;
> > > > > +       req->user_data = sqe->user_data;
> > > > > +
> > > > > +       ret = -EINVAL;
> > > > > +       switch (sqe->opcode) {
> > > > > +       case IORING_OP_NOP:
> > > > > +               ret = io_nop(req, sqe);
> > > > > +               break;
> > > > > +       case IORING_OP_READV:
> > > > > +               ret = io_read(req, sqe, force_nonblock);
> > > > > +               break;
> > > > > +       case IORING_OP_WRITEV:
> > > > > +               ret = io_write(req, sqe, force_nonblock);
> > > > > +               break;
> > > > > +       default:
> > > > > +               ret = -EINVAL;
> > > > > +               break;
> > > > > +       }
> > > > > +
> > > > > +       return ret;
> > > > > +}
> > > > > +
> > > > > +static void io_sq_wq_submit_work(struct work_struct *work)
> > > > > +{
> > > > > +       struct io_kiocb *req = container_of(work, struct io_kiocb, work);
> > > > > +       struct sqe_submit *s = &req->submit;
> > > > > +       u64 user_data = s->sqe->user_data;
> > > > > +       struct io_ring_ctx *ctx = req->ctx;
> > > > > +       mm_segment_t old_fs = get_fs();
> > > > > +       struct files_struct *old_files;
> > > > > +       int ret;
> > > > > +
> > > > > +        /* Ensure we clear previously set forced non-block flag */
> > > > > +       req->flags &= ~REQ_F_FORCE_NONBLOCK;
> > > > > +
> > > > > +       old_files = current->files;
> > > > > +       current->files = ctx->sqo_files;
> > > >
> > > > I think you're not supposed to twiddle with current->files without
> > > > holding task_lock(current).
> > >
> > > 'current' is the work queue item in this case, do we need to protect
> > > against anything else? I can add the locking around the assignments
> > > (both places).
> >
> > Stuff like proc_fd_link() uses get_files_struct(), which grabs a
> > reference to your current files_struct protected only by task_lock();
> > and it doesn't use anything like READ_ONCE(), so even if the object
> > lifetime is not a problem, get_files_struct() could potentially crash
> > due to a double-read (reading task->files twice and assuming that the
> > result will be the same). As far as I can tell, this procfs code also
> > works on kernel threads.
> >
> > > > > +       if (!mmget_not_zero(ctx->sqo_mm)) {
> > > > > +               ret = -EFAULT;
> > > > > +               goto err;
> > > > > +       }
> > > > > +
> > > > > +       use_mm(ctx->sqo_mm);
> > > > > +       set_fs(USER_DS);
> > > > > +
> > > > > +       ret = __io_submit_sqe(ctx, req, s, false);
> > > > > +
> > > > > +       set_fs(old_fs);
> > > > > +       unuse_mm(ctx->sqo_mm);
> > > > > +       mmput(ctx->sqo_mm);
> > > > > +err:
> > > > > +       if (ret) {
> > > > > +               io_cqring_add_event(ctx, user_data, ret, 0);
> > > > > +               io_free_req(req);
> > > > > +       }
> > > > > +       current->files = old_files;
> > > > > +}
> > > >
> > > > [...]
> > > > > +static int io_sq_offload_start(struct io_ring_ctx *ctx)
> > > > > +{
> > > > > +       int ret;
> > > > > +
> > > > > +       ctx->sqo_mm = current->mm;
> > > >
> > > > What keeps this thing alive?
> > >
> > > I think we're dealing with the same thing as the files below, I'll
> > > defer to that.
> > >
> > > > > +       /*
> > > > > +        * This is safe since 'current' has the fd installed, and if that gets
> > > > > +        * closed on exit, then fops->release() is invoked which waits for the
> > > > > +        * async contexts to flush and exit before exiting.
> > > > > +        */
> > > > > +       ret = -EBADF;
> > > > > +       ctx->sqo_files = current->files;
> > > > > +       if (!ctx->sqo_files)
> > > > > +               goto err;
> > > >
> > > > That's gnarly. Adding Al Viro to the thread.
> > > >
> > > > I think you misunderstand the semantics of f_op->release. The ->flush
> > > > handler is invoked whenever a file descriptor is closed through
> > > > filp_close() (via deletion of the files_struct, sys_close(),
> > > > sys_dup2(), ...), so if you had used that one, _maybe_ this would
> > > > work. But the ->release handler only runs when the _last_ reference to
> > > > a struct file has been dropped - so you can, for example, fork() a
> > > > child, then exit() in the parent, and the ->release handler isn't
> > > > invoked. So I don't see how this can work.
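
For reference, the two hooks being contrasted here, as an illustrative
sketch rather than code from the patch (the "demo" names are made up):

#include <linux/fs.h>

static int demo_flush(struct file *file, fl_owner_t id)
{
	/*
	 * Runs from filp_close() every time an fd referring to this file
	 * is closed: close(), dup2() over it, the files_struct being torn
	 * down at exit(), ...
	 */
	return 0;
}

static int demo_release(struct inode *inode, struct file *file)
{
	/*
	 * Runs exactly once, when the final reference to this struct file
	 * is dropped - possibly long after the opener exited, if the fd
	 * was inherited across fork() or passed via SCM_RIGHTS.
	 */
	return 0;
}

static const struct file_operations demo_fops = {
	.flush		= demo_flush,
	.release	= demo_release,
};
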
> > >
> > > The anonfd is CLOEXEC. The idea is exactly that it only runs when the
> > > last reference to the file has been dropped. Not sure why you think I
> > > need ->flush() here?
> >
> > Can't I just use fcntl(fd, F_SETFD, 0) to clear the CLOEXEC flag?
> > Or send the fd via SCM_RIGHTS?
> >
> > > > But even if you had abused ->flush for this instead: close_files()
> > > > currently has a comment in it that claims that "this is the last
> > > > reference to the files structure"; this change would make that claim
> > > > untrue.
> > >
> > > Let me see if I can explain my intent better than that comment... We
> > > know the parent who set up the io_uring instance will be around for as
> > > long as the io_uring instance persists.
> >
> > That's the part that I think is wrong: As far as I can tell, the
> > parent can go away and you won't notice.
> >
> > Also, note that "the parent" is different things for ->files and ->mm.
> > You can have a multithreaded process whose threads don't have the same
> > ->files, or multiple processes that share ->files without sharing ->mm,
> > ...
>
> This had actually been get_files_struct() in early versions, and I had
> reported to Jens that it allows something like
>
> int main() {
>   struct io_uring_params uring_params = {
>         .flags = IORING_SETUP_SQPOLL,
>   };
>   int uring_fd = syscall(425 /* io_uring_setup */, 16, &uring_params);
> }
>
> to leak both the files_struct and the kthread, as the files_struct and
> the uring context form a circular reference.  I haven't really come up
> with a good way to reconcile the requirements here; perhaps we need an
> exit_uring() akin to exit_aio()?

Oh, yuck. Uuuh... can we make "struct files_struct" doubly-refcounted,
like "struct mm_struct"? One reference type to keep the contents
intact (the reference type you normally use, and the type used by
uring when the thread is running), and one reference type to just keep
the struct itself existing, but without preserving its contents
(reference held consistently by the uring thread)?
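
For comparison, the mm_struct precedent referred to here splits the two
roles into mm_users (pins the contents, via mmget_not_zero()/mmput())
and mm_count (pins only the struct, via mmgrab()/mmdrop()); the
workqueue code quoted above already relies on the first half for
ctx->sqo_mm. A rough sketch of that pattern, with made-up "my_ctx"
names:

#include <linux/sched/mm.h>	/* mmgrab(), mmget_not_zero(), mmput(), mmdrop() */
#include <linux/mmu_context.h>	/* use_mm(), unuse_mm() */

struct my_ctx {			/* hypothetical long-lived context */
	struct mm_struct *mm;
};

static void my_ctx_init(struct my_ctx *ctx)
{
	ctx->mm = current->mm;
	mmgrab(ctx->mm);	/* keeps the struct allocated, not the mappings */
}

static int my_ctx_do_work(struct my_ctx *ctx)
{
	if (!mmget_not_zero(ctx->mm))	/* owner exited; contents are gone */
		return -EFAULT;
	use_mm(ctx->mm);
	/* ... access user memory ... */
	unuse_mm(ctx->mm);
	mmput(ctx->mm);
	return 0;
}

static void my_ctx_destroy(struct my_ctx *ctx)
{
	mmdrop(ctx->mm);	/* may free the struct itself */
}
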



* Re: [PATCH 05/18] Add io_uring IO interface
  2019-02-01 17:04           ` Jann Horn
@ 2019-02-01 17:23             ` Jann Horn
  2019-02-01 18:05               ` Al Viro
  0 siblings, 1 reply; 62+ messages in thread
From: Jann Horn @ 2019-02-01 17:23 UTC (permalink / raw)
  To: Matt Mullins, viro, axboe, linux-fsdevel
  Cc: linux-aio, linux-block, jmoyer, linux-api, hch, linux-man, avi

On Fri, Feb 1, 2019 at 6:04 PM Jann Horn <jannh@google.com> wrote:
>
> On Fri, Feb 1, 2019 at 5:57 PM Matt Mullins <mmullins@fb.com> wrote:
> > On Tue, 2019-01-29 at 00:59 +0100, Jann Horn wrote:
> > > On Tue, Jan 29, 2019 at 12:47 AM Jens Axboe <axboe@kernel.dk> wrote:
> > > > On 1/28/19 3:32 PM, Jann Horn wrote:
> > > > > On Mon, Jan 28, 2019 at 10:35 PM Jens Axboe <axboe@kernel.dk> wrote:
> > > > > > The submission queue (SQ) and completion queue (CQ) rings are shared
> > > > > > between the application and the kernel. This eliminates the need to
> > > > > > copy data back and forth to submit and complete IO.
> > > > > >
> > > > > > IO submissions use the io_uring_sqe data structure, and completions
> > > > > > are generated in the form of io_uring_cqe data structures. The SQ
> > > > > > ring is an index into the io_uring_sqe array, which makes it possible
> > > > > > to submit a batch of IOs without them being contiguous in the ring.
> > > > > > The CQ ring is always contiguous, as completion events are inherently
> > > > > > unordered, and hence any io_uring_cqe entry can point back to an
> > > > > > arbitrary submission.
> > > > > >
> > > > > > Two new system calls are added for this:
> > > > > >
> > > > > > io_uring_setup(entries, params)
> > > > > >         Sets up a context for doing async IO. On success, returns a file
> > > > > >         descriptor that the application can mmap to gain access to the
> > > > > >         SQ ring, CQ ring, and io_uring_sqes.
> > > > > >
> > > > > > io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
> > > > > >         Initiates IO against the rings mapped to this fd, or waits for
> > > > > >         them to complete, or both. The behavior is controlled by the
> > > > > >         parameters passed in. If 'to_submit' is non-zero, then we'll
> > > > > >         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
> > > > > >         kernel will wait for 'min_complete' events, if they aren't
> > > > > >         already available. It's valid to set IORING_ENTER_GETEVENTS
> > > > > >         and 'min_complete' == 0 at the same time, this allows the
> > > > > >         kernel to return already completed events without waiting
> > > > > >         for them. This is useful only for polling, as for IRQ
> > > > > >         driven IO, the application can just check the CQ ring
> > > > > >         without entering the kernel.
> > > > > >
> > > > > > With this setup, it's possible to do async IO with a single system
> > > > > > call. Future developments will enable polled IO with this interface,
> > > > > > and polled submission as well. The latter will enable an application
> > > > > > to do IO without doing ANY system calls at all.
> > > > > >
> > > > > > For IRQ driven IO, an application only needs to enter the kernel for
> > > > > > completions if it wants to wait for them to occur.
> > > > > >
> > > > > > Each io_uring is backed by a workqueue, to support buffered async IO
> > > > > > as well. We will only punt to an async context if the command would
> > > > > > need to wait for IO on the device side. Any data that can be accessed
> > > > > > directly in the page cache is done inline. This avoids the slowness
> > > > > > issue of usual threadpools, since cached data is accessed as quickly
> > > > > > as a sync interface.
> > > > > >
> > > > > > Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
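
As a rough illustration of the call pattern described above (not part
of the patch: it assumes the uapi header and the syscall numbers wired
up by this series, skips the mmap() of the rings, and only exercises
the "IORING_ENTER_GETEVENTS with min_complete == 0" case), a userspace
program might look like:

#include <linux/io_uring.h>	/* struct io_uring_params, IORING_* flags */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* raw wrappers; 425/426 are the numbers added for x86 in this patchset */
static int io_uring_setup(unsigned int entries, struct io_uring_params *p)
{
	return syscall(425, entries, p);
}

static int io_uring_enter(int fd, unsigned int to_submit,
			  unsigned int min_complete, unsigned int flags,
			  sigset_t *sig)
{
	return syscall(426, fd, to_submit, min_complete, flags, sig,
		       sig ? _NSIG / 8 : 0);
}

int main(void)
{
	struct io_uring_params p;
	int fd, ret;

	memset(&p, 0, sizeof(p));
	fd = io_uring_setup(16, &p);	/* 16 SQ entries, no special flags */
	if (fd < 0) {
		perror("io_uring_setup");
		return 1;
	}

	/*
	 * A real application would now mmap() the SQ ring, CQ ring and the
	 * io_uring_sqe array using the offsets returned in p.sq_off and
	 * p.cq_off, and queue requests in that shared memory. With nothing
	 * queued, GETEVENTS with min_complete == 0 just reaps whatever has
	 * already completed and returns without blocking.
	 */
	ret = io_uring_enter(fd, 0, 0, IORING_ENTER_GETEVENTS, NULL);
	if (ret < 0)
		perror("io_uring_enter");

	close(fd);
	return 0;
}
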
> > > > >
> > > > > [...]
> > > > > > +static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> > > > > > +                     bool force_nonblock)
> > > > > > +{
> > > > > > +       struct kiocb *kiocb = &req->rw;
> > > > > > +       int ret;
> > > > > > +
> > > > > > +       kiocb->ki_filp = fget(sqe->fd);
> > > > > > +       if (unlikely(!kiocb->ki_filp))
> > > > > > +               return -EBADF;
> > > > > > +       kiocb->ki_pos = sqe->off;
> > > > > > +       kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
> > > > > > +       kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
> > > > > > +       if (sqe->ioprio) {
> > > > > > +               ret = ioprio_check_cap(sqe->ioprio);
> > > > > > +               if (ret)
> > > > > > +                       goto out_fput;
> > > > > > +
> > > > > > +               kiocb->ki_ioprio = sqe->ioprio;
> > > > > > +       } else
> > > > > > +               kiocb->ki_ioprio = get_current_ioprio();
> > > > > > +
> > > > > > +       ret = kiocb_set_rw_flags(kiocb, sqe->rw_flags);
> > > > > > +       if (unlikely(ret))
> > > > > > +               goto out_fput;
> > > > > > +       if (force_nonblock) {
> > > > > > +               kiocb->ki_flags |= IOCB_NOWAIT;
> > > > > > +               req->flags |= REQ_F_FORCE_NONBLOCK;
> > > > > > +       }
> > > > > > +       if (kiocb->ki_flags & IOCB_HIPRI) {
> > > > > > +               ret = -EINVAL;
> > > > > > +               goto out_fput;
> > > > > > +       }
> > > > > > +
> > > > > > +       kiocb->ki_complete = io_complete_rw;
> > > > > > +       return 0;
> > > > > > +out_fput:
> > > > > > +       fput(kiocb->ki_filp);
> > > > > > +       return ret;
> > > > > > +}
> > > > >
> > > > > [...]
> > > > > > +static ssize_t io_read(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> > > > > > +                      bool force_nonblock)
> > > > > > +{
> > > > > > +       struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
> > > > > > +       struct kiocb *kiocb = &req->rw;
> > > > > > +       struct iov_iter iter;
> > > > > > +       struct file *file;
> > > > > > +       ssize_t ret;
> > > > > > +
> > > > > > +       ret = io_prep_rw(req, sqe, force_nonblock);
> > > > > > +       if (ret)
> > > > > > +               return ret;
> > > > > > +       file = kiocb->ki_filp;
> > > > > > +
> > > > > > +       ret = -EBADF;
> > > > > > +       if (unlikely(!(file->f_mode & FMODE_READ)))
> > > > > > +               goto out_fput;
> > > > > > +       ret = -EINVAL;
> > > > > > +       if (unlikely(!file->f_op->read_iter))
> > > > > > +               goto out_fput;
> > > > > > +
> > > > > > +       ret = io_import_iovec(req->ctx, READ, sqe, &iovec, &iter);
> > > > > > +       if (ret)
> > > > > > +               goto out_fput;
> > > > > > +
> > > > > > +       ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_iter_count(&iter));
> > > > > > +       if (!ret) {
> > > > > > +               ssize_t ret2;
> > > > > > +
> > > > > > +               /* Catch -EAGAIN return for forced non-blocking submission */
> > > > > > +               ret2 = call_read_iter(file, kiocb, &iter);
> > > > > > +               if (!force_nonblock || ret2 != -EAGAIN)
> > > > > > +                       io_rw_done(kiocb, ret2);
> > > > > > +               else
> > > > > > +                       ret = -EAGAIN;
> > > > > > +       }
> > > > > > +       kfree(iovec);
> > > > > > +out_fput:
> > > > > > +       if (unlikely(ret))
> > > > > > +               fput(file);
> > > > > > +       return ret;
> > > > > > +}
> > > > >
> > > > > [...]
> > > > > > +static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> > > > > > +                          struct sqe_submit *s, bool force_nonblock)
> > > > > > +{
> > > > > > +       const struct io_uring_sqe *sqe = s->sqe;
> > > > > > +       ssize_t ret;
> > > > > > +
> > > > > > +       if (unlikely(s->index >= ctx->sq_entries))
> > > > > > +               return -EINVAL;
> > > > > > +       req->user_data = sqe->user_data;
> > > > > > +
> > > > > > +       ret = -EINVAL;
> > > > > > +       switch (sqe->opcode) {
> > > > > > +       case IORING_OP_NOP:
> > > > > > +               ret = io_nop(req, sqe);
> > > > > > +               break;
> > > > > > +       case IORING_OP_READV:
> > > > > > +               ret = io_read(req, sqe, force_nonblock);
> > > > > > +               break;
> > > > > > +       case IORING_OP_WRITEV:
> > > > > > +               ret = io_write(req, sqe, force_nonblock);
> > > > > > +               break;
> > > > > > +       default:
> > > > > > +               ret = -EINVAL;
> > > > > > +               break;
> > > > > > +       }
> > > > > > +
> > > > > > +       return ret;
> > > > > > +}
> > > > > > +
> > > > > > +static void io_sq_wq_submit_work(struct work_struct *work)
> > > > > > +{
> > > > > > +       struct io_kiocb *req = container_of(work, struct io_kiocb, work);
> > > > > > +       struct sqe_submit *s = &req->submit;
> > > > > > +       u64 user_data = s->sqe->user_data;
> > > > > > +       struct io_ring_ctx *ctx = req->ctx;
> > > > > > +       mm_segment_t old_fs = get_fs();
> > > > > > +       struct files_struct *old_files;
> > > > > > +       int ret;
> > > > > > +
> > > > > > +        /* Ensure we clear previously set forced non-block flag */
> > > > > > +       req->flags &= ~REQ_F_FORCE_NONBLOCK;
> > > > > > +
> > > > > > +       old_files = current->files;
> > > > > > +       current->files = ctx->sqo_files;
> > > > >
> > > > > I think you're not supposed to twiddle with current->files without
> > > > > holding task_lock(current).
> > > >
> > > > 'current' is the work queue item in this case, do we need to protect
> > > > against anything else? I can add the locking around the assignments
> > > > (both places).
> > >
> > > Stuff like proc_fd_link() uses get_files_struct(), which grabs a
> > > reference to your current files_struct protected only by task_lock();
> > > and it doesn't use anything like READ_ONCE(), so even if the object
> > > lifetime is not a problem, get_files_struct() could potentially crash
> > > due to a double-read (reading task->files twice and assuming that the
> > > result will be the same). As far as I can tell, this procfs code also
> > > works on kernel threads.
> > >
> > > > > > +       if (!mmget_not_zero(ctx->sqo_mm)) {
> > > > > > +               ret = -EFAULT;
> > > > > > +               goto err;
> > > > > > +       }
> > > > > > +
> > > > > > +       use_mm(ctx->sqo_mm);
> > > > > > +       set_fs(USER_DS);
> > > > > > +
> > > > > > +       ret = __io_submit_sqe(ctx, req, s, false);
> > > > > > +
> > > > > > +       set_fs(old_fs);
> > > > > > +       unuse_mm(ctx->sqo_mm);
> > > > > > +       mmput(ctx->sqo_mm);
> > > > > > +err:
> > > > > > +       if (ret) {
> > > > > > +               io_cqring_add_event(ctx, user_data, ret, 0);
> > > > > > +               io_free_req(req);
> > > > > > +       }
> > > > > > +       current->files = old_files;
> > > > > > +}
> > > > >
> > > > > [...]
> > > > > > +static int io_sq_offload_start(struct io_ring_ctx *ctx)
> > > > > > +{
> > > > > > +       int ret;
> > > > > > +
> > > > > > +       ctx->sqo_mm = current->mm;
> > > > >
> > > > > What keeps this thing alive?
> > > >
> > > > I think we're dealing with the same thing as the files below, I'll
> > > > defer to that.
> > > >
> > > > > > +       /*
> > > > > > +        * This is safe since 'current' has the fd installed, and if that gets
> > > > > > +        * closed on exit, then fops->release() is invoked which waits for the
> > > > > > +        * async contexts to flush and exit before exiting.
> > > > > > +        */
> > > > > > +       ret = -EBADF;
> > > > > > +       ctx->sqo_files = current->files;
> > > > > > +       if (!ctx->sqo_files)
> > > > > > +               goto err;
> > > > >
> > > > > That's gnarly. Adding Al Viro to the thread.
> > > > >
> > > > > I think you misunderstand the semantics of f_op->release. The ->flush
> > > > > handler is invoked whenever a file descriptor is closed through
> > > > > filp_close() (via deletion of the files_struct, sys_close(),
> > > > > sys_dup2(), ...), so if you had used that one, _maybe_ this would
> > > > > work. But the ->release handler only runs when the _last_ reference to
> > > > > a struct file has been dropped - so you can, for example, fork() a
> > > > > child, then exit() in the parent, and the ->release handler isn't
> > > > > invoked. So I don't see how this can work.
> > > >
> > > > The anonfd is CLOEXEC. The idea is exactly that it only runs when the
> > > > last reference to the file has been dropped. Not sure why you think I
> > > > need ->flush() here?
> > >
> > > Can't I just use fcntl(fd, F_SETFD, 0) to clear the CLOEXEC flag?
> > > Or send the fd via SCM_RIGHTS?
> > >
> > > > > But even if you had abused ->flush for this instead: close_files()
> > > > > currently has a comment in it that claims that "this is the last
> > > > > reference to the files structure"; this change would make that claim
> > > > > untrue.
> > > >
> > > > Let me see if I can explain my intent better than that comment... We
> > > > know the parent who set up the io_uring instance will be around for as
> > > > long as the io_uring instance persists.
> > >
> > > That's the part that I think is wrong: As far as I can tell, the
> > > parent can go away and you won't notice.
> > >
> > > Also, note that "the parent" is different things for ->files and ->mm.
> > > You can have a multithreaded process whose threads don't have the same
> > > ->files, or multiple processes that share ->files without sharing ->mm,
> > > ...
> >
> > This had actually been get_files_struct() in early versions, and I had
> > reported to Jens that it allows something like
> >
> > int main() {
> >   struct io_uring_params uring_params = {
> >         .flags = IORING_SETUP_SQPOLL,
> >   };
> >   int uring_fd = syscall(425 /* io_uring_setup */, 16, &uring_params);
> > }
> >
> > to leak both the files_struct and the kthread, as the files_struct and
> > the uring context form a circular reference.  I haven't really come up
> > with a good way to reconcile the requirements here; perhaps we need an
> > exit_uring() akin to exit_aio()?
>
> Oh, yuck. Uuuh... can we make "struct files_struct" doubly-refcounted,
> like "struct mm_struct"? One reference type to keep the contents
> intact (the reference type you normally use, and the type used by
> uring when the thread is running), and one reference type to just keep
> the struct itself existing, but without preserving its contents
> (reference held consistently by the uring thread)?

Something like this (completely untested); and then instead of the
current get_files_struct(), you'd do get_files_struct_weak(), and
while the thread is running, it protects the files_struct from dying
with tryget_weak_files_struct() / put_files_struct().

Al, do you have opinions on this?

===============
diff --git a/fs/file.c b/fs/file.c
index 3209ee271c41..fbf02ef2753d 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -281,6 +281,7 @@ struct files_struct *dup_fd(struct files_struct *oldf, int *errorp)
        if (!newf)
                goto out;

+       kref_init(&newf->weak_refs);
        atomic_set(&newf->count, 1);

        spin_lock_init(&newf->file_lock);
@@ -410,6 +411,26 @@ struct files_struct *get_files_struct(struct task_struct *task)
        return files;
 }

+static void free_files_struct(struct kref *ref) {
+       struct files_struct *files =
+               container_of(ref, struct files_struct, weak_refs);
+       kmem_cache_free(files_cachep, files);
+}
+
+void put_files_struct_weak(struct files_struct *files) {
+       kref_put(&files->weak_refs, free_files_struct);
+}
+
+struct files_struct *get_files_struct_weak(struct task_struct *task)
+{
+       struct files_struct *files = get_files_struct(task);
+       if (files) {
+               kref_get(&files->weak_refs);
+               put_files_struct(files);
+       }
+       return files;
+}
+
 void put_files_struct(struct files_struct *files)
 {
        if (atomic_dec_and_test(&files->count)) {
@@ -418,10 +439,17 @@ void put_files_struct(struct files_struct *files)
                /* free the arrays if they are not embedded */
                if (fdt != &files->fdtab)
                        __free_fdtable(fdt);
-               kmem_cache_free(files_cachep, files);
+               put_files_struct_weak(files);
        }
 }

+struct files_struct *tryget_weak_files_struct(struct files_struct *fs) {
+       if (atomic_inc_not_zero(&fs->count)) {
+               return fs;
+       }
+       return NULL;
+}
+
 void reset_files_struct(struct files_struct *files)
 {
        struct task_struct *tsk = current;
@@ -448,6 +476,7 @@ void exit_files(struct task_struct *tsk)

 struct files_struct init_files = {
        .count          = ATOMIC_INIT(1),
+       .weak_refs      = KREF_INIT(1),
        .fdt            = &init_files.fdtab,
        .fdtab          = {
                .max_fds        = NR_OPEN_DEFAULT,
diff --git a/include/linux/fdtable.h b/include/linux/fdtable.h
index f07c55ea0c22..6ad95a95cc0b 100644
--- a/include/linux/fdtable.h
+++ b/include/linux/fdtable.h
@@ -14,6 +14,7 @@
 #include <linux/types.h>
 #include <linux/init.h>
 #include <linux/fs.h>
+#include <linux/kref.h>

 #include <linux/atomic.h>

@@ -50,6 +51,7 @@ struct files_struct {
    * read mostly part
    */
        atomic_t count;
+       struct kref weak_refs;
        bool resize_in_progress;
        wait_queue_head_t resize_wait;

@@ -107,6 +109,9 @@ struct task_struct;

 struct files_struct *get_files_struct(struct task_struct *);
 void put_files_struct(struct files_struct *fs);
+void put_files_struct_weak(struct files_struct *files);
+struct files_struct *get_files_struct_weak(struct task_struct *);
+struct files_struct *tryget_weak_files_struct(struct files_struct *);
 void reset_files_struct(struct files_struct *);
 int unshare_files(struct files_struct **);
 struct files_struct *dup_fd(struct files_struct *, int *) __latent_entropy;
===============
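
To spell out how the io_uring side might consume these helpers, here is
a sketch layered on the (untested) diff above - only
get_files_struct_weak(), tryget_weak_files_struct(), put_files_struct()
and put_files_struct_weak() come from the diff; the io_* wrappers are
made-up names, and task_lock() is taken around the current->files
switch as suggested earlier in the thread:

/* assumes <linux/fdtable.h> with the additions above, <linux/sched/task.h> */

/* io_sq_offload_start(): pin the struct, but not its contents */
static int io_sq_offload_grab_files(struct io_ring_ctx *ctx)
{
	ctx->sqo_files = get_files_struct_weak(current);
	return ctx->sqo_files ? 0 : -EBADF;
}

/* io_sq_wq_submit_work(): upgrade to a full reference for one request */
static int io_borrow_files(struct io_ring_ctx *ctx,
			   struct files_struct **old_files)
{
	struct files_struct *files;

	files = tryget_weak_files_struct(ctx->sqo_files);
	if (!files)
		return -EBADF;		/* the owner has already exited */

	task_lock(current);
	*old_files = current->files;
	current->files = files;
	task_unlock(current);
	return 0;
}

static void io_unborrow_files(struct files_struct *old_files)
{
	struct files_struct *files = current->files;

	task_lock(current);
	current->files = old_files;
	task_unlock(current);
	put_files_struct(files);	/* drop the per-request full reference */
}

/* ring teardown: drop the long-lived weak reference */
static void io_sq_offload_teardown_files(struct io_ring_ctx *ctx)
{
	put_files_struct_weak(ctx->sqo_files);
	ctx->sqo_files = NULL;
}
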



* Re: [PATCH 05/18] Add io_uring IO interface
  2019-02-01 17:23             ` Jann Horn
@ 2019-02-01 18:05               ` Al Viro
  0 siblings, 0 replies; 62+ messages in thread
From: Al Viro @ 2019-02-01 18:05 UTC (permalink / raw)
  To: Jann Horn
  Cc: Matt Mullins, axboe, linux-fsdevel, linux-aio, linux-block,
	jmoyer, linux-api, hch, linux-man, avi

On Fri, Feb 01, 2019 at 06:23:27PM +0100, Jann Horn wrote:

> > Oh, yuck. Uuuh... can we make "struct files_struct" doubly-refcounted,
> > like "struct mm_struct"? One reference type to keep the contents
> > intact (the reference type you normally use, and the type used by
> > uring when the thread is running), and one reference type to just keep
> > the struct itself existing, but without preserving its contents
> > (reference held consistently by the uring thread)?
> 
> Something like this (completely untested); and then instead of the
> current get_files_struct(), you'd do get_files_struct_weak(), and
> while the thread is running, it protects the files_struct from dying
> with tryget_weak_files_struct() / put_files_struct().
> 
> Al, do you have opinions on this?

Yes, but they are not fit for polite company.  IMO the entire approach
is FUBAR; I'll post a more detailed review, but what I've seen so far is
veto fodder.



end of thread, other threads:[~2019-02-01 18:05 UTC | newest]

Thread overview: 62+ messages
2019-01-28 21:35 [PATCHSET v8] io_uring IO interface Jens Axboe
2019-01-28 21:35 ` [PATCH 01/18] fs: add an iopoll method to struct file_operations Jens Axboe
2019-01-28 21:35 ` [PATCH 02/18] block: wire up block device iopoll method Jens Axboe
2019-01-28 21:35 ` [PATCH 03/18] block: add bio_set_polled() helper Jens Axboe
2019-01-28 21:35 ` [PATCH 04/18] iomap: wire up the iopoll method Jens Axboe
2019-01-28 21:35 ` [PATCH 05/18] Add io_uring IO interface Jens Axboe
2019-01-28 21:53   ` Jeff Moyer
2019-01-28 21:56     ` Jens Axboe
2019-01-28 22:32   ` Jann Horn
2019-01-28 23:46     ` Jens Axboe
2019-01-28 23:59       ` Jann Horn
2019-01-29  0:03         ` Jens Axboe
2019-01-29  0:31           ` Jens Axboe
2019-01-29  0:34             ` Jann Horn
2019-01-29  0:55               ` Jens Axboe
2019-01-29  0:58                 ` Jann Horn
2019-01-29  1:01                   ` Jens Axboe
2019-02-01 16:57         ` Matt Mullins
2019-02-01 17:04           ` Jann Horn
2019-02-01 17:23             ` Jann Horn
2019-02-01 18:05               ` Al Viro
2019-01-29  1:07   ` Jann Horn
2019-01-29  2:21     ` Jann Horn
2019-01-29  2:54       ` Jens Axboe
2019-01-29  3:46       ` Jens Axboe
2019-01-29 15:56         ` Jann Horn
2019-01-29 16:06           ` Jens Axboe
2019-01-29  2:21     ` Jens Axboe
2019-01-29  1:29   ` Jann Horn
2019-01-29  1:31     ` Jens Axboe
2019-01-29  1:32       ` Jann Horn
2019-01-29  2:23         ` Jens Axboe
2019-01-29  7:12   ` Bert Wesarg
2019-01-29 12:12   ` Florian Weimer
2019-01-29 13:35     ` Jens Axboe
2019-01-28 21:35 ` [PATCH 06/18] io_uring: add fsync support Jens Axboe
2019-01-28 21:35 ` [PATCH 07/18] io_uring: support for IO polling Jens Axboe
2019-01-29 17:24   ` Christoph Hellwig
2019-01-29 18:31     ` Jens Axboe
2019-01-29 19:10       ` Jens Axboe
2019-01-29 20:35         ` Jeff Moyer
2019-01-29 20:37           ` Jens Axboe
2019-01-28 21:35 ` [PATCH 08/18] fs: add fget_many() and fput_many() Jens Axboe
2019-01-28 21:35 ` [PATCH 09/18] io_uring: use fget/fput_many() for file references Jens Axboe
2019-01-28 21:56   ` Jann Horn
2019-01-28 22:03     ` Jens Axboe
2019-01-28 21:35 ` [PATCH 10/18] io_uring: batch io_kiocb allocation Jens Axboe
2019-01-29 17:26   ` Christoph Hellwig
2019-01-29 18:14     ` Jens Axboe
2019-01-28 21:35 ` [PATCH 11/18] block: implement bio helper to add iter bvec pages to bio Jens Axboe
2019-01-28 21:35 ` [PATCH 12/18] io_uring: add support for pre-mapped user IO buffers Jens Axboe
2019-01-28 23:35   ` Jann Horn
2019-01-28 23:50     ` Jens Axboe
2019-01-29  0:36       ` Jann Horn
2019-01-29  1:25         ` Jens Axboe
2019-01-28 21:35 ` [PATCH 13/18] io_uring: add file set registration Jens Axboe
2019-01-28 21:35 ` [PATCH 14/18] io_uring: add submission polling Jens Axboe
2019-01-28 21:35 ` [PATCH 15/18] io_uring: add io_kiocb ref count Jens Axboe
2019-01-29 17:26   ` Christoph Hellwig
2019-01-28 21:35 ` [PATCH 16/18] io_uring: add support for IORING_OP_POLL Jens Axboe
2019-01-28 21:35 ` [PATCH 17/18] io_uring: allow workqueue item to handle multiple buffered requests Jens Axboe
2019-01-28 21:35 ` [PATCH 18/18] io_uring: add io_uring_event cache hit information Jens Axboe
