* [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF
@ 2023-03-30 16:46 Jens Axboe
  2023-03-30 16:46 ` [PATCH 01/11] block: ensure bio_alloc_map_data() deals with ITER_UBUF correctly Jens Axboe
                   ` (11 more replies)
  0 siblings, 12 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro

Hi,

No real functional changes since v6, just sending it out so that whatever
is in my tree matches what has been sent out upstream. Changes since v6:

- Rearrange a few of the sound/IB patches so that they don't see
  ITER_UBUF in the middle of the series. The end result is the same.

- Correct a few comments, notably one on why __ubuf_iovec isn't const.

Passes all my testing, and I also re-ran the micro benchmark as it's
probably more relevant than my peak testing. In short, it reads
4k from /dev/zero in a loop with readv. Before the patches, that'd
be turned into an ITER_IOVEC, and after them, an ITER_UBUF. Graph here:

https://kernel.dk/4k-zero-read.png

and in real numbers it ends up being a 3.7% reduction when using
ITER_UBUF. Sadly, in absolute numbers, comparing read(2) and readv(2),
the latter takes 2.11x as long in the stock kernel, and 2.01x as long
with the patches. So while the single segment case is better now than
before, it's still waaaay slower than read(2), which never has to copy
in an iovec at all. Testing was run with all security mitigations off.
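
For reference, a minimal sketch of the shape of that benchmark loop (an
assumed reconstruction, not the exact harness; the iteration count and
error handling are placeholders):

#include <fcntl.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	int fd = open("/dev/zero", O_RDONLY);

	if (fd < 0)
		return 1;
	/* single segment readv in a loop; before the series the kernel
	 * turns this into an ITER_IOVEC, with the series into an ITER_UBUF */
	for (int i = 0; i < 1000000; i++)
		if (readv(fd, &iov, 1) != (ssize_t)sizeof(buf))
			return 1;
	close(fd);
	return 0;
}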


-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 01/11] block: ensure bio_alloc_map_data() deals with ITER_UBUF correctly
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
@ 2023-03-30 16:46 ` Jens Axboe
  2023-03-30 16:46 ` [PATCH 02/11] iov_iter: add iter_iovec() helper Jens Axboe
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

This helper blindly copies the iovec, even if we don't have one.
Make this case a bit smarter by only doing so if we have an iovec
array to copy.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-map.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 9137d16cecdc..3bfcad64d67c 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -29,10 +29,11 @@ static struct bio_map_data *bio_alloc_map_data(struct iov_iter *data,
 	bmd = kmalloc(struct_size(bmd, iov, data->nr_segs), gfp_mask);
 	if (!bmd)
 		return NULL;
-	memcpy(bmd->iov, data->iov, sizeof(struct iovec) * data->nr_segs);
 	bmd->iter = *data;
-	if (iter_is_iovec(data))
+	if (iter_is_iovec(data)) {
+		memcpy(bmd->iov, data->iov, sizeof(struct iovec) * data->nr_segs);
 		bmd->iter.iov = bmd->iov;
+	}
 	return bmd;
 }
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 02/11] iov_iter: add iter_iovec() helper
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
  2023-03-30 16:46 ` [PATCH 01/11] block: ensure bio_alloc_map_data() deals with ITER_UBUF correctly Jens Axboe
@ 2023-03-30 16:46 ` Jens Axboe
  2023-03-30 16:46 ` [PATCH 03/11] IB/hfi1: check for user backed iterator, not specific iterator type Jens Axboe
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

This returns a pointer to the current iovec entry in the iterator. Only
useful with ITER_IOVEC right now, but it prepares us to treat ITER_UBUF
and ITER_IOVEC identically for the first segment.

Rename struct iov_iter->iov to iov_iter->__iov to find any potentially
troublesome spots, and also to prevent anyone from adding new code that
accesses iter->iov directly.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-map.c                          |  4 +-
 drivers/infiniband/hw/hfi1/file_ops.c    |  3 +-
 drivers/infiniband/hw/qib/qib_file_ops.c |  2 +-
 drivers/net/tun.c                        |  3 +-
 drivers/vhost/scsi.c                     |  2 +-
 fs/btrfs/file.c                          | 11 +++--
 fs/fuse/file.c                           |  2 +-
 include/linux/uio.h                      |  9 ++--
 io_uring/net.c                           |  4 +-
 io_uring/rw.c                            |  8 ++--
 lib/iov_iter.c                           | 56 +++++++++++++-----------
 sound/core/pcm_native.c                  | 22 ++++++----
 12 files changed, 73 insertions(+), 53 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 3bfcad64d67c..04c55f1c492e 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -31,8 +31,8 @@ static struct bio_map_data *bio_alloc_map_data(struct iov_iter *data,
 		return NULL;
 	bmd->iter = *data;
 	if (iter_is_iovec(data)) {
-		memcpy(bmd->iov, data->iov, sizeof(struct iovec) * data->nr_segs);
-		bmd->iter.iov = bmd->iov;
+		memcpy(bmd->iov, iter_iov(data), sizeof(struct iovec) * data->nr_segs);
+		bmd->iter.__iov = bmd->iov;
 	}
 	return bmd;
 }
diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index b1d6ca7e9708..3065db9d6bb9 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -287,11 +287,12 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
 	}
 
 	while (dim) {
+		const struct iovec *iov = iter_iov(from);
 		int ret;
 		unsigned long count = 0;
 
 		ret = hfi1_user_sdma_process_request(
-			fd, (struct iovec *)(from->iov + done),
+			fd, (struct iovec *)(iov + done),
 			dim, &count);
 		if (ret) {
 			reqs = ret;
diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index 80fe92a21f96..4cee39337866 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -2248,7 +2248,7 @@ static ssize_t qib_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	if (!iter_is_iovec(from) || !from->nr_segs || !pq)
 		return -EINVAL;
 
-	return qib_user_sdma_writev(rcd, pq, from->iov, from->nr_segs);
+	return qib_user_sdma_writev(rcd, pq, iter_iov(from), from->nr_segs);
 }
 
 static struct class *qib_class;
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index ad653b32b2f0..5df1eba7b30a 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1486,7 +1486,8 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile,
 	skb->truesize += skb->data_len;
 
 	for (i = 1; i < it->nr_segs; i++) {
-		size_t fragsz = it->iov[i].iov_len;
+		const struct iovec *iov = iter_iov(it);
+		size_t fragsz = iov->iov_len;
 		struct page *page;
 		void *frag;
 
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index b244e7c0f514..042caea64007 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -671,7 +671,7 @@ vhost_scsi_calc_sgls(struct iov_iter *iter, size_t bytes, int max_sgls)
 {
 	int sgl_count = 0;
 
-	if (!iter || !iter->iov) {
+	if (!iter || !iter_iov(iter)) {
 		pr_err("%s: iter->iov is NULL, but expected bytes: %zu"
 		       " present\n", __func__, bytes);
 		return -EINVAL;
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 5cc5a1faaef5..f649647392e0 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -3730,10 +3730,15 @@ static int check_direct_read(struct btrfs_fs_info *fs_info,
 	if (!iter_is_iovec(iter))
 		return 0;
 
-	for (seg = 0; seg < iter->nr_segs; seg++)
-		for (i = seg + 1; i < iter->nr_segs; i++)
-			if (iter->iov[seg].iov_base == iter->iov[i].iov_base)
+	for (seg = 0; seg < iter->nr_segs; seg++) {
+		for (i = seg + 1; i < iter->nr_segs; i++) {
+			const struct iovec *iov1 = iter_iov(iter) + seg;
+			const struct iovec *iov2 = iter_iov(iter) + i;
+
+			if (iov1->iov_base == iov2->iov_base)
 				return -EINVAL;
+		}
+	}
 	return 0;
 }
 
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index de37a3a06a71..89d97f6188e0 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1419,7 +1419,7 @@ static ssize_t fuse_cache_write_iter(struct kiocb *iocb, struct iov_iter *from)
 
 static inline unsigned long fuse_get_user_addr(const struct iov_iter *ii)
 {
-	return (unsigned long)ii->iov->iov_base + ii->iov_offset;
+	return (unsigned long)iter_iov(ii)->iov_base + ii->iov_offset;
 }
 
 static inline size_t fuse_get_frag_size(const struct iov_iter *ii,
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 27e3fd942960..4218624b7f78 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -51,7 +51,8 @@ struct iov_iter {
 	};
 	size_t count;
 	union {
-		const struct iovec *iov;
+		/* use iter_iov() to get the current vec */
+		const struct iovec *__iov;
 		const struct kvec *kvec;
 		const struct bio_vec *bvec;
 		struct xarray *xarray;
@@ -68,6 +69,8 @@ struct iov_iter {
 	};
 };
 
+#define iter_iov(iter)	(iter)->__iov
+
 static inline enum iter_type iov_iter_type(const struct iov_iter *i)
 {
 	return i->iter_type;
@@ -146,9 +149,9 @@ static inline size_t iov_length(const struct iovec *iov, unsigned long nr_segs)
 static inline struct iovec iov_iter_iovec(const struct iov_iter *iter)
 {
 	return (struct iovec) {
-		.iov_base = iter->iov->iov_base + iter->iov_offset,
+		.iov_base = iter_iov(iter)->iov_base + iter->iov_offset,
 		.iov_len = min(iter->count,
-			       iter->iov->iov_len - iter->iov_offset),
+			       iter_iov(iter)->iov_len - iter->iov_offset),
 	};
 }
 
diff --git a/io_uring/net.c b/io_uring/net.c
index 4040cf093318..89e839013837 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -184,8 +184,8 @@ static int io_setup_async_msg(struct io_kiocb *req,
 		async_msg->msg.msg_name = &async_msg->addr;
 	/* if were using fast_iov, set it to the new one */
 	if (iter_is_iovec(&kmsg->msg.msg_iter) && !kmsg->free_iov) {
-		size_t fast_idx = kmsg->msg.msg_iter.iov - kmsg->fast_iov;
-		async_msg->msg.msg_iter.iov = &async_msg->fast_iov[fast_idx];
+		size_t fast_idx = iter_iov(&kmsg->msg.msg_iter) - kmsg->fast_iov;
+		async_msg->msg.msg_iter.__iov = &async_msg->fast_iov[fast_idx];
 	}
 
 	return -EAGAIN;
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 4c233910e200..7573a34ea42a 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -503,10 +503,10 @@ static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
 	if (!iovec) {
 		unsigned iov_off = 0;
 
-		io->s.iter.iov = io->s.fast_iov;
-		if (iter->iov != fast_iov) {
-			iov_off = iter->iov - fast_iov;
-			io->s.iter.iov += iov_off;
+		io->s.iter.__iov = io->s.fast_iov;
+		if (iter->__iov != fast_iov) {
+			iov_off = iter_iov(iter) - fast_iov;
+			io->s.iter.__iov += iov_off;
 		}
 		if (io->s.fast_iov != fast_iov)
 			memcpy(io->s.fast_iov + iov_off, fast_iov + iov_off,
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 274014e4eafe..87488c4aad3f 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -126,13 +126,13 @@ __out:								\
 			iterate_buf(i, n, base, len, off,	\
 						i->ubuf, (I)) 	\
 		} else if (likely(iter_is_iovec(i))) {		\
-			const struct iovec *iov = i->iov;	\
+			const struct iovec *iov = iter_iov(i);	\
 			void __user *base;			\
 			size_t len;				\
 			iterate_iovec(i, n, base, len, off,	\
 						iov, (I))	\
-			i->nr_segs -= iov - i->iov;		\
-			i->iov = iov;				\
+			i->nr_segs -= iov - iter_iov(i);	\
+			i->__iov = iov;				\
 		} else if (iov_iter_is_bvec(i)) {		\
 			const struct bio_vec *bvec = i->bvec;	\
 			void *base;				\
@@ -355,7 +355,7 @@ size_t fault_in_iov_iter_readable(const struct iov_iter *i, size_t size)
 		size_t skip;
 
 		size -= count;
-		for (p = i->iov, skip = i->iov_offset; count; p++, skip = 0) {
+		for (p = iter_iov(i), skip = i->iov_offset; count; p++, skip = 0) {
 			size_t len = min(count, p->iov_len - skip);
 			size_t ret;
 
@@ -398,7 +398,7 @@ size_t fault_in_iov_iter_writeable(const struct iov_iter *i, size_t size)
 		size_t skip;
 
 		size -= count;
-		for (p = i->iov, skip = i->iov_offset; count; p++, skip = 0) {
+		for (p = iter_iov(i), skip = i->iov_offset; count; p++, skip = 0) {
 			size_t len = min(count, p->iov_len - skip);
 			size_t ret;
 
@@ -425,7 +425,7 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
 		.nofault = false,
 		.user_backed = true,
 		.data_source = direction,
-		.iov = iov,
+		.__iov = iov,
 		.nr_segs = nr_segs,
 		.iov_offset = 0,
 		.count = count
@@ -876,14 +876,14 @@ static void iov_iter_iovec_advance(struct iov_iter *i, size_t size)
 	i->count -= size;
 
 	size += i->iov_offset; // from beginning of current segment
-	for (iov = i->iov, end = iov + i->nr_segs; iov < end; iov++) {
+	for (iov = iter_iov(i), end = iov + i->nr_segs; iov < end; iov++) {
 		if (likely(size < iov->iov_len))
 			break;
 		size -= iov->iov_len;
 	}
 	i->iov_offset = size;
-	i->nr_segs -= iov - i->iov;
-	i->iov = iov;
+	i->nr_segs -= iov - iter_iov(i);
+	i->__iov = iov;
 }
 
 void iov_iter_advance(struct iov_iter *i, size_t size)
@@ -958,12 +958,12 @@ void iov_iter_revert(struct iov_iter *i, size_t unroll)
 			unroll -= n;
 		}
 	} else { /* same logics for iovec and kvec */
-		const struct iovec *iov = i->iov;
+		const struct iovec *iov = iter_iov(i);
 		while (1) {
 			size_t n = (--iov)->iov_len;
 			i->nr_segs++;
 			if (unroll <= n) {
-				i->iov = iov;
+				i->__iov = iov;
 				i->iov_offset = n - unroll;
 				return;
 			}
@@ -980,7 +980,7 @@ size_t iov_iter_single_seg_count(const struct iov_iter *i)
 {
 	if (i->nr_segs > 1) {
 		if (likely(iter_is_iovec(i) || iov_iter_is_kvec(i)))
-			return min(i->count, i->iov->iov_len - i->iov_offset);
+			return min(i->count, iter_iov(i)->iov_len - i->iov_offset);
 		if (iov_iter_is_bvec(i))
 			return min(i->count, i->bvec->bv_len - i->iov_offset);
 	}
@@ -1095,13 +1095,14 @@ static bool iov_iter_aligned_iovec(const struct iov_iter *i, unsigned addr_mask,
 	unsigned k;
 
 	for (k = 0; k < i->nr_segs; k++, skip = 0) {
-		size_t len = i->iov[k].iov_len - skip;
+		const struct iovec *iov = iter_iov(i) + k;
+		size_t len = iov->iov_len - skip;
 
 		if (len > size)
 			len = size;
 		if (len & len_mask)
 			return false;
-		if ((unsigned long)(i->iov[k].iov_base + skip) & addr_mask)
+		if ((unsigned long)(iov->iov_base + skip) & addr_mask)
 			return false;
 
 		size -= len;
@@ -1194,9 +1195,10 @@ static unsigned long iov_iter_alignment_iovec(const struct iov_iter *i)
 	unsigned k;
 
 	for (k = 0; k < i->nr_segs; k++, skip = 0) {
-		size_t len = i->iov[k].iov_len - skip;
+		const struct iovec *iov = iter_iov(i) + k;
+		size_t len = iov->iov_len - skip;
 		if (len) {
-			res |= (unsigned long)i->iov[k].iov_base + skip;
+			res |= (unsigned long)iov->iov_base + skip;
 			if (len > size)
 				len = size;
 			res |= len;
@@ -1273,14 +1275,15 @@ unsigned long iov_iter_gap_alignment(const struct iov_iter *i)
 		return ~0U;
 
 	for (k = 0; k < i->nr_segs; k++) {
-		if (i->iov[k].iov_len) {
-			unsigned long base = (unsigned long)i->iov[k].iov_base;
+		const struct iovec *iov = iter_iov(i) + k;
+		if (iov->iov_len) {
+			unsigned long base = (unsigned long)iov->iov_base;
 			if (v) // if not the first one
 				res |= base | v; // this start | previous end
-			v = base + i->iov[k].iov_len;
-			if (size <= i->iov[k].iov_len)
+			v = base + iov->iov_len;
+			if (size <= iov->iov_len)
 				break;
-			size -= i->iov[k].iov_len;
+			size -= iov->iov_len;
 		}
 	}
 	return res;
@@ -1396,13 +1399,14 @@ static unsigned long first_iovec_segment(const struct iov_iter *i, size_t *size)
 		return (unsigned long)i->ubuf + i->iov_offset;
 
 	for (k = 0, skip = i->iov_offset; k < i->nr_segs; k++, skip = 0) {
-		size_t len = i->iov[k].iov_len - skip;
+		const struct iovec *iov = iter_iov(i) + k;
+		size_t len = iov->iov_len - skip;
 
 		if (unlikely(!len))
 			continue;
 		if (*size > len)
 			*size = len;
-		return (unsigned long)i->iov[k].iov_base + skip;
+		return (unsigned long)iov->iov_base + skip;
 	}
 	BUG(); // if it had been empty, we wouldn't get called
 }
@@ -1614,7 +1618,7 @@ static int iov_npages(const struct iov_iter *i, int maxpages)
 	const struct iovec *p;
 	int npages = 0;
 
-	for (p = i->iov; size; skip = 0, p++) {
+	for (p = iter_iov(i); size; skip = 0, p++) {
 		unsigned offs = offset_in_page(p->iov_base + skip);
 		size_t len = min(p->iov_len - skip, size);
 
@@ -1691,7 +1695,7 @@ const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags)
 				    flags);
 	else if (iov_iter_is_kvec(new) || iter_is_iovec(new))
 		/* iovec and kvec have identical layout */
-		return new->iov = kmemdup(new->iov,
+		return new->__iov = kmemdup(new->__iov,
 				   new->nr_segs * sizeof(struct iovec),
 				   flags);
 	return NULL;
@@ -1918,7 +1922,7 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
 	if (iov_iter_is_bvec(i))
 		i->bvec -= state->nr_segs - i->nr_segs;
 	else
-		i->iov -= state->nr_segs - i->nr_segs;
+		i->__iov -= state->nr_segs - i->nr_segs;
 	i->nr_segs = state->nr_segs;
 }
 
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 331380c2438b..8bb97ee6720d 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -3521,6 +3521,7 @@ static ssize_t snd_pcm_readv(struct kiocb *iocb, struct iov_iter *to)
 	unsigned long i;
 	void __user **bufs;
 	snd_pcm_uframes_t frames;
+	const struct iovec *iov = iter_iov(to);
 
 	pcm_file = iocb->ki_filp->private_data;
 	substream = pcm_file->substream;
@@ -3534,14 +3535,16 @@ static ssize_t snd_pcm_readv(struct kiocb *iocb, struct iov_iter *to)
 		return -EINVAL;
 	if (to->nr_segs > 1024 || to->nr_segs != runtime->channels)
 		return -EINVAL;
-	if (!frame_aligned(runtime, to->iov->iov_len))
+	if (!frame_aligned(runtime, iov->iov_len))
 		return -EINVAL;
-	frames = bytes_to_samples(runtime, to->iov->iov_len);
+	frames = bytes_to_samples(runtime, iov->iov_len);
 	bufs = kmalloc_array(to->nr_segs, sizeof(void *), GFP_KERNEL);
 	if (bufs == NULL)
 		return -ENOMEM;
-	for (i = 0; i < to->nr_segs; ++i)
-		bufs[i] = to->iov[i].iov_base;
+	for (i = 0; i < to->nr_segs; ++i) {
+		bufs[i] = iov->iov_base;
+		iov++;
+	}
 	result = snd_pcm_lib_readv(substream, bufs, frames);
 	if (result > 0)
 		result = frames_to_bytes(runtime, result);
@@ -3558,6 +3561,7 @@ static ssize_t snd_pcm_writev(struct kiocb *iocb, struct iov_iter *from)
 	unsigned long i;
 	void __user **bufs;
 	snd_pcm_uframes_t frames;
+	const struct iovec *iov = iter_iov(from);
 
 	pcm_file = iocb->ki_filp->private_data;
 	substream = pcm_file->substream;
@@ -3570,14 +3574,16 @@ static ssize_t snd_pcm_writev(struct kiocb *iocb, struct iov_iter *from)
 	if (!iter_is_iovec(from))
 		return -EINVAL;
 	if (from->nr_segs > 128 || from->nr_segs != runtime->channels ||
-	    !frame_aligned(runtime, from->iov->iov_len))
+	    !frame_aligned(runtime, iov->iov_len))
 		return -EINVAL;
-	frames = bytes_to_samples(runtime, from->iov->iov_len);
+	frames = bytes_to_samples(runtime, iov->iov_len);
 	bufs = kmalloc_array(from->nr_segs, sizeof(void *), GFP_KERNEL);
 	if (bufs == NULL)
 		return -ENOMEM;
-	for (i = 0; i < from->nr_segs; ++i)
-		bufs[i] = from->iov[i].iov_base;
+	for (i = 0; i < from->nr_segs; ++i) {
+		bufs[i] = iov->iov_base;
+		iov++;
+	}
 	result = snd_pcm_lib_writev(substream, bufs, frames);
 	if (result > 0)
 		result = frames_to_bytes(runtime, result);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 03/11] IB/hfi1: check for user backed iterator, not specific iterator type
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
  2023-03-30 16:46 ` [PATCH 01/11] block: ensure bio_alloc_map_data() deals with ITER_UBUF correctly Jens Axboe
  2023-03-30 16:46 ` [PATCH 02/11] iov_iter: add iter_iovec() helper Jens Axboe
@ 2023-03-30 16:46 ` Jens Axboe
  2023-03-30 16:46 ` [PATCH 04/11] IB/qib: " Jens Axboe
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

In preparation for switching single segment iterators to using ITER_UBUF,
swap the check for whether we are user backed or not. While at it, move
it outside the srcu locking area to clean up the code a bit.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/infiniband/hw/hfi1/file_ops.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index 3065db9d6bb9..f3d6ce45c397 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -267,6 +267,8 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
 
 	if (!HFI1_CAP_IS_KSET(SDMA))
 		return -EINVAL;
+	if (!from->user_backed)
+		return -EINVAL;
 	idx = srcu_read_lock(&fd->pq_srcu);
 	pq = srcu_dereference(fd->pq, &fd->pq_srcu);
 	if (!cq || !pq) {
@@ -274,11 +276,6 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
 		return -EIO;
 	}
 
-	if (!iter_is_iovec(from) || !dim) {
-		srcu_read_unlock(&fd->pq_srcu, idx);
-		return -EINVAL;
-	}
-
 	trace_hfi1_sdma_request(fd->dd, fd->uctxt->ctxt, fd->subctxt, dim);
 
 	if (atomic_read(&pq->n_reqs) == pq->n_max_reqs) {
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 04/11] IB/qib: check for user backed iterator, not specific iterator type
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
                   ` (2 preceding siblings ...)
  2023-03-30 16:46 ` [PATCH 03/11] IB/hfi1: check for user backed iterator, not specific iterator type Jens Axboe
@ 2023-03-30 16:46 ` Jens Axboe
  2023-03-30 16:46 ` [PATCH 05/11] ALSA: pcm: " Jens Axboe
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

In preparation for switching single segment iterators to using ITER_UBUF,
swap the check for whether we are user backed or not.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/infiniband/hw/qib/qib_file_ops.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index 4cee39337866..815ea72ad473 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -2245,7 +2245,7 @@ static ssize_t qib_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	struct qib_ctxtdata *rcd = ctxt_fp(iocb->ki_filp);
 	struct qib_user_sdma_queue *pq = fp->pq;
 
-	if (!iter_is_iovec(from) || !from->nr_segs || !pq)
+	if (!from->user_backed || !from->nr_segs || !pq)
 		return -EINVAL;
 
 	return qib_user_sdma_writev(rcd, pq, iter_iov(from), from->nr_segs);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 05/11] ALSA: pcm: check for user backed iterator, not specific iterator type
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
                   ` (3 preceding siblings ...)
  2023-03-30 16:46 ` [PATCH 04/11] IB/qib: " Jens Axboe
@ 2023-03-30 16:46 ` Jens Axboe
  2023-03-30 16:46 ` [PATCH 06/11] iov_iter: add iter_iov_addr() and iter_iov_len() helpers Jens Axboe
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

In preparation for switching single segment iterators to using ITER_UBUF,
swap the check for whether we are user backed or not.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 sound/core/pcm_native.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 8bb97ee6720d..5868661d461b 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -3531,7 +3531,7 @@ static ssize_t snd_pcm_readv(struct kiocb *iocb, struct iov_iter *to)
 	if (runtime->state == SNDRV_PCM_STATE_OPEN ||
 	    runtime->state == SNDRV_PCM_STATE_DISCONNECTED)
 		return -EBADFD;
-	if (!iter_is_iovec(to))
+	if (!to->user_backed)
 		return -EINVAL;
 	if (to->nr_segs > 1024 || to->nr_segs != runtime->channels)
 		return -EINVAL;
@@ -3571,7 +3571,7 @@ static ssize_t snd_pcm_writev(struct kiocb *iocb, struct iov_iter *from)
 	if (runtime->state == SNDRV_PCM_STATE_OPEN ||
 	    runtime->state == SNDRV_PCM_STATE_DISCONNECTED)
 		return -EBADFD;
-	if (!iter_is_iovec(from))
+	if (!from->user_backed)
 		return -EINVAL;
 	if (from->nr_segs > 128 || from->nr_segs != runtime->channels ||
 	    !frame_aligned(runtime, iov->iov_len))
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 06/11] iov_iter: add iter_iov_addr() and iter_iov_len() helpers
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
                   ` (4 preceding siblings ...)
  2023-03-30 16:46 ` [PATCH 05/11] ALSA: pcm: " Jens Axboe
@ 2023-03-30 16:46 ` Jens Axboe
  2023-03-30 16:46 ` [PATCH 07/11] iov_iter: remove iov_iter_iovec() Jens Axboe
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

These just return the address and length of the current iovec segment
in the iterator. Convert existing iov_iter_iovec() users to use them
instead of getting a copy of the current vec.
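
As a rough illustration of the idea (a userspace toy with stand-in
types, not the kernel code; only the macro names mirror the helpers
added below), the helpers simply apply iov_offset to the current
segment instead of materializing a struct iovec copy:

#include <stdio.h>
#include <stddef.h>
#include <sys/uio.h>

struct mini_iter {
	const struct iovec *iov;	/* current segment */
	size_t iov_offset;		/* bytes already consumed in it */
};

#define iter_iov_addr(it)	((char *)(it)->iov->iov_base + (it)->iov_offset)
#define iter_iov_len(it)	((it)->iov->iov_len - (it)->iov_offset)

int main(void)
{
	char buf[8] = "abcdefg";
	struct iovec v = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct mini_iter it = { .iov = &v, .iov_offset = 3 };

	/* prints the remainder of the segment: length 5, first char 'd' */
	printf("len=%zu first=%c\n", iter_iov_len(&it), *iter_iov_addr(&it));
	return 0;
}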

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/read_write.c     | 11 +++++------
 include/linux/uio.h |  2 ++
 io_uring/rw.c       | 27 +++++++++++++--------------
 mm/madvise.c        |  9 ++++-----
 4 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/fs/read_write.c b/fs/read_write.c
index 7a2ff6157eda..a21ba3be7dbe 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -749,15 +749,14 @@ static ssize_t do_loop_readv_writev(struct file *filp, struct iov_iter *iter,
 		return -EOPNOTSUPP;
 
 	while (iov_iter_count(iter)) {
-		struct iovec iovec = iov_iter_iovec(iter);
 		ssize_t nr;
 
 		if (type == READ) {
-			nr = filp->f_op->read(filp, iovec.iov_base,
-					      iovec.iov_len, ppos);
+			nr = filp->f_op->read(filp, iter_iov_addr(iter),
+						iter_iov_len(iter), ppos);
 		} else {
-			nr = filp->f_op->write(filp, iovec.iov_base,
-					       iovec.iov_len, ppos);
+			nr = filp->f_op->write(filp, iter_iov_addr(iter),
+						iter_iov_len(iter), ppos);
 		}
 
 		if (nr < 0) {
@@ -766,7 +765,7 @@ static ssize_t do_loop_readv_writev(struct file *filp, struct iov_iter *iter,
 			break;
 		}
 		ret += nr;
-		if (nr != iovec.iov_len)
+		if (nr != iter_iov_len(iter))
 			break;
 		iov_iter_advance(iter, nr);
 	}
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 4218624b7f78..b7fce87b720e 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -70,6 +70,8 @@ struct iov_iter {
 };
 
 #define iter_iov(iter)	(iter)->__iov
+#define iter_iov_addr(iter)	(iter_iov(iter)->iov_base + (iter)->iov_offset)
+#define iter_iov_len(iter)	(iter_iov(iter)->iov_len - (iter)->iov_offset)
 
 static inline enum iter_type iov_iter_type(const struct iov_iter *i)
 {
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 7573a34ea42a..f33ba6f28247 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -447,26 +447,25 @@ static ssize_t loop_rw_iter(int ddir, struct io_rw *rw, struct iov_iter *iter)
 	ppos = io_kiocb_ppos(kiocb);
 
 	while (iov_iter_count(iter)) {
-		struct iovec iovec;
+		void __user *addr;
+		size_t len;
 		ssize_t nr;
 
 		if (iter_is_ubuf(iter)) {
-			iovec.iov_base = iter->ubuf + iter->iov_offset;
-			iovec.iov_len = iov_iter_count(iter);
+			addr = iter->ubuf + iter->iov_offset;
+			len = iov_iter_count(iter);
 		} else if (!iov_iter_is_bvec(iter)) {
-			iovec = iov_iter_iovec(iter);
+			addr = iter_iov_addr(iter);
+			len = iter_iov_len(iter);
 		} else {
-			iovec.iov_base = u64_to_user_ptr(rw->addr);
-			iovec.iov_len = rw->len;
+			addr = u64_to_user_ptr(rw->addr);
+			len = rw->len;
 		}
 
-		if (ddir == READ) {
-			nr = file->f_op->read(file, iovec.iov_base,
-					      iovec.iov_len, ppos);
-		} else {
-			nr = file->f_op->write(file, iovec.iov_base,
-					       iovec.iov_len, ppos);
-		}
+		if (ddir == READ)
+			nr = file->f_op->read(file, addr, len, ppos);
+		else
+			nr = file->f_op->write(file, addr, len, ppos);
 
 		if (nr < 0) {
 			if (!ret)
@@ -482,7 +481,7 @@ static ssize_t loop_rw_iter(int ddir, struct io_rw *rw, struct iov_iter *iter)
 			if (!rw->len)
 				break;
 		}
-		if (nr != iovec.iov_len)
+		if (nr != len)
 			break;
 	}
 
diff --git a/mm/madvise.c b/mm/madvise.c
index 340125d08c03..9f389c5304d2 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1456,7 +1456,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
 		size_t, vlen, int, behavior, unsigned int, flags)
 {
 	ssize_t ret;
-	struct iovec iovstack[UIO_FASTIOV], iovec;
+	struct iovec iovstack[UIO_FASTIOV];
 	struct iovec *iov = iovstack;
 	struct iov_iter iter;
 	struct task_struct *task;
@@ -1503,12 +1503,11 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
 	total_len = iov_iter_count(&iter);
 
 	while (iov_iter_count(&iter)) {
-		iovec = iov_iter_iovec(&iter);
-		ret = do_madvise(mm, (unsigned long)iovec.iov_base,
-					iovec.iov_len, behavior);
+		ret = do_madvise(mm, (unsigned long)iter_iov_addr(&iter),
+					iter_iov_len(&iter), behavior);
 		if (ret < 0)
 			break;
-		iov_iter_advance(&iter, iovec.iov_len);
+		iov_iter_advance(&iter, iter_iov_len(&iter));
 	}
 
 	ret = (total_len - iov_iter_count(&iter)) ? : ret;
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 07/11] iov_iter: remove iov_iter_iovec()
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
                   ` (5 preceding siblings ...)
  2023-03-30 16:46 ` [PATCH 06/11] iov_iter: add iter_iov_addr() and iter_iov_len() helpers Jens Axboe
@ 2023-03-30 16:46 ` Jens Axboe
  2023-03-30 16:46 ` [PATCH 08/11] iov_iter: set nr_segs = 1 for ITER_UBUF Jens Axboe
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

No more users are left of this function.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 include/linux/uio.h | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index b7fce87b720e..7f585ceedcb2 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -148,15 +148,6 @@ static inline size_t iov_length(const struct iovec *iov, unsigned long nr_segs)
 	return ret;
 }
 
-static inline struct iovec iov_iter_iovec(const struct iov_iter *iter)
-{
-	return (struct iovec) {
-		.iov_base = iter_iov(iter)->iov_base + iter->iov_offset,
-		.iov_len = min(iter->count,
-			       iter_iov(iter)->iov_len - iter->iov_offset),
-	};
-}
-
 size_t copy_page_from_iter_atomic(struct page *page, unsigned offset,
 				  size_t bytes, struct iov_iter *i);
 void iov_iter_advance(struct iov_iter *i, size_t bytes);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 08/11] iov_iter: set nr_segs = 1 for ITER_UBUF
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
                   ` (6 preceding siblings ...)
  2023-03-30 16:46 ` [PATCH 07/11] iov_iter: remove iov_iter_iovec() Jens Axboe
@ 2023-03-30 16:46 ` Jens Axboe
  2023-03-30 16:47 ` [PATCH 09/11] iov_iter: overlay struct iovec and ubuf/len Jens Axboe
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

To avoid needing to check if a given user backed iov_iter is of type
ITER_IOVEC or ITER_UBUF, set the number of segments for the ITER_UBUF
case to 1 as we're carrying a single segment.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 include/linux/uio.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 7f585ceedcb2..5dbd2dcab35c 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -355,7 +355,8 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
 		.user_backed = true,
 		.data_source = direction,
 		.ubuf = buf,
-		.count = count
+		.count = count,
+		.nr_segs = 1
 	};
 }
 /* Flags for iov_iter_get/extract_pages*() */
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 09/11] iov_iter: overlay struct iovec and ubuf/len
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
                   ` (7 preceding siblings ...)
  2023-03-30 16:46 ` [PATCH 08/11] iov_iter: set nr_segs = 1 for ITER_UBUF Jens Axboe
@ 2023-03-30 16:47 ` Jens Axboe
  2023-03-30 16:47 ` [PATCH 10/11] iov_iter: convert import_single_range() to ITER_UBUF Jens Axboe
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:47 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

Add an internal struct iovec that we can return as a pointer, with the
fields of the iovec overlapping with the ITER_UBUF ubuf and length
fields.

Then we can have iter_iov() check for the appropriate type, and return
&iter->__ubuf_iovec for ITER_UBUF and iter->__iov for ITER_IOVEC and
things will magically work out for a single segment request regardless
of either type.
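
To make the layout trick more concrete, here is a standalone userspace
toy (an assumed illustration with simplified stand-in types, not the
actual kernel struct; it relies on the common LP64 layout) showing that
the ubuf/count pair and the embedded iovec alias the same storage:

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct iovec_demo {		/* stand-in for struct iovec */
	void *iov_base;
	size_t iov_len;
};

struct iter_demo {		/* stand-in for the relevant part of iov_iter */
	union {
		struct iovec_demo __ubuf_iovec;
		struct {
			union {
				/* pointer members overlap iov_base */
				const struct iovec_demo *__iov;
				void *ubuf;
			};
			size_t count;	/* overlaps iov_len */
		};
	};
};

int main(void)
{
	struct iter_demo i = { .ubuf = (void *)0x1000, .count = 4096 };

	/* for a UBUF-style iterator, &i.__ubuf_iovec is usable as an iovec */
	assert(i.__ubuf_iovec.iov_base == (void *)0x1000);
	assert(i.__ubuf_iovec.iov_len == 4096);
	printf("base=%p len=%zu\n", i.__ubuf_iovec.iov_base,
	       i.__ubuf_iovec.iov_len);
	return 0;
}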

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 include/linux/uio.h | 44 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 5dbd2dcab35c..ed35f4427a0a 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -49,15 +49,35 @@ struct iov_iter {
 		size_t iov_offset;
 		int last_offset;
 	};
-	size_t count;
+	/*
+	 * Hack alert: overlay ubuf_iovec with iovec + count, so
+	 * that the members resolve correctly regardless of the type
+	 * of iterator used. This means that you can use:
+	 *
+	 * &iter->__ubuf_iovec or iter->__iov
+	 *
+	 * interchangeably for the user_backed cases, hence simplifying
+	 * some of the cases that need to deal with both.
+	 */
 	union {
-		/* use iter_iov() to get the current vec */
-		const struct iovec *__iov;
-		const struct kvec *kvec;
-		const struct bio_vec *bvec;
-		struct xarray *xarray;
-		struct pipe_inode_info *pipe;
-		void __user *ubuf;
+		/*
+		 * This really should be a const, but we cannot do that without
+		 * also modifying any of the zero-filling iter init functions.
+		 * Leave it non-const for now, but it should be treated as such.
+		 */
+		struct iovec __ubuf_iovec;
+		struct {
+			union {
+				/* use iter_iov() to get the current vec */
+				const struct iovec *__iov;
+				const struct kvec *kvec;
+				const struct bio_vec *bvec;
+				struct xarray *xarray;
+				struct pipe_inode_info *pipe;
+				void __user *ubuf;
+			};
+			size_t count;
+		};
 	};
 	union {
 		unsigned long nr_segs;
@@ -69,7 +89,13 @@ struct iov_iter {
 	};
 };
 
-#define iter_iov(iter)	(iter)->__iov
+static inline const struct iovec *iter_iov(const struct iov_iter *iter)
+{
+	if (iter->iter_type == ITER_UBUF)
+		return (const struct iovec *) &iter->__ubuf_iovec;
+	return iter->__iov;
+}
+
 #define iter_iov_addr(iter)	(iter_iov(iter)->iov_base + (iter)->iov_offset)
 #define iter_iov_len(iter)	(iter_iov(iter)->iov_len - (iter)->iov_offset)
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 10/11] iov_iter: convert import_single_range() to ITER_UBUF
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
                   ` (8 preceding siblings ...)
  2023-03-30 16:47 ` [PATCH 09/11] iov_iter: overlay struct iovec and ubuf/len Jens Axboe
@ 2023-03-30 16:47 ` Jens Axboe
  2023-03-30 16:47 ` [PATCH 11/11] iov_iter: import single vector iovecs as ITER_UBUF Jens Axboe
  2023-03-30 17:11 ` [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Linus Torvalds
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:47 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

Since we're just importing a single vector, we don't have to turn it
into an ITER_IOVEC. Instead turn it into an ITER_UBUF, which is cheaper
to iterate.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 lib/iov_iter.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 87488c4aad3f..f411bda1171f 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1870,9 +1870,7 @@ int import_single_range(int rw, void __user *buf, size_t len,
 	if (unlikely(!access_ok(buf, len)))
 		return -EFAULT;
 
-	iov->iov_base = buf;
-	iov->iov_len = len;
-	iov_iter_init(i, rw, iov, 1, len);
+	iov_iter_ubuf(i, rw, buf, len);
 	return 0;
 }
 EXPORT_SYMBOL(import_single_range);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 11/11] iov_iter: import single vector iovecs as ITER_UBUF
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
                   ` (9 preceding siblings ...)
  2023-03-30 16:47 ` [PATCH 10/11] iov_iter: convert import_single_range() to ITER_UBUF Jens Axboe
@ 2023-03-30 16:47 ` Jens Axboe
  2023-03-30 17:11 ` [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Linus Torvalds
  11 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 16:47 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: torvalds, brauner, viro, Jens Axboe

Add a special case to __import_iovec(), which imports a single segment
iovec as an ITER_UBUF rather than an ITER_IOVEC. ITER_UBUF is cheaper
to iterate than ITER_IOVEC, and for a single segment iovec, there's no
point in using a segmented iterator.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 lib/iov_iter.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index f411bda1171f..3e6c9bcfa612 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1784,6 +1784,30 @@ struct iovec *iovec_from_user(const struct iovec __user *uvec,
 	return iov;
 }
 
+/*
+ * Single segment iovec supplied by the user, import it as ITER_UBUF.
+ */
+static ssize_t __import_iovec_ubuf(int type, const struct iovec __user *uvec,
+				   struct iovec **iovp, struct iov_iter *i,
+				   bool compat)
+{
+	struct iovec *iov = *iovp;
+	ssize_t ret;
+
+	if (compat)
+		ret = copy_compat_iovec_from_user(iov, uvec, 1);
+	else
+		ret = copy_iovec_from_user(iov, uvec, 1);
+	if (unlikely(ret))
+		return ret;
+
+	ret = import_ubuf(type, iov->iov_base, iov->iov_len, i);
+	if (unlikely(ret))
+		return ret;
+	*iovp = NULL;
+	return i->count;
+}
+
 ssize_t __import_iovec(int type, const struct iovec __user *uvec,
 		 unsigned nr_segs, unsigned fast_segs, struct iovec **iovp,
 		 struct iov_iter *i, bool compat)
@@ -1792,6 +1816,9 @@ ssize_t __import_iovec(int type, const struct iovec __user *uvec,
 	unsigned long seg;
 	struct iovec *iov;
 
+	if (nr_segs == 1)
+		return __import_iovec_ubuf(type, uvec, iovp, i, compat);
+
 	iov = iovec_from_user(uvec, nr_segs, fast_segs, *iovp, compat);
 	if (IS_ERR(iov)) {
 		*iovp = NULL;
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF
  2023-03-30 16:46 [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Jens Axboe
                   ` (10 preceding siblings ...)
  2023-03-30 16:47 ` [PATCH 11/11] iov_iter: import single vector iovecs as ITER_UBUF Jens Axboe
@ 2023-03-30 17:11 ` Linus Torvalds
  2023-03-30 17:33   ` Jens Axboe
  11 siblings, 1 reply; 18+ messages in thread
From: Linus Torvalds @ 2023-03-30 17:11 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-fsdevel, brauner, viro

On Thu, Mar 30, 2023 at 9:47 AM Jens Axboe <axboe@kernel.dk> wrote:
>
> Sadly, in absolute numbers, comparing read(2) and readv(2),
> the latter takes 2.11x as long in the stock kernel, and 2.01x as long
> with the patches. So while single segment is better now than before,
> it's still waaaay slower than having to copy in a single iovec. Testing
> was run with all security mitigations off.

What does the profile say? Is it all in import_iovec() or what?

I do note that we have some completely horrid "helper" functions in
the iter paths: things like "do_iter_readv_writev()" supposedly being
a common function, but then it ends up doing some small setup and
just doing a conditional on the "type" after all, so when it isn't
inlined, you have those things that don't predict well at all.

And the iter interfaces don't have just that iterator, they have the
whole kiocb overhead too. All in the name of being generic. Most file
descriptors don't even support the simpler ".read" interface, because
they want the whole thing with IOCB_DIRECT flags etc.

So to some degree it's unfair to compare read-vs-read_iter. The latter
has all that disgusting support for O_DIRECT and friends, and testing
with /dev/zero just doesn't show that part.

             Linus

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF
  2023-03-30 17:11 ` [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF Linus Torvalds
@ 2023-03-30 17:33   ` Jens Axboe
  2023-03-30 21:53     ` Linus Torvalds
  0 siblings, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 17:33 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-fsdevel, brauner, viro

On 3/30/23 11:11 AM, Linus Torvalds wrote:
> On Thu, Mar 30, 2023 at 9:47 AM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> Sadly, in absolute numbers, comparing read(2) and readv(2),
>> the latter takes 2.11x as long in the stock kernel, and 2.01x as long
>> with the patches. So while single segment is better now than before,
>> it's still waaaay slower than having to copy in a single iovec. Testing
>> was run with all security mitigations off.
> 
> What does the profile say? Iis it all in import_iovec() or what?
> 
> I do note that we have some completely horrid "helper" functions in
> the iter paths: things like "do_iter_readv_writev()" supposedly being
> a common function , but then it ends up doing some small setup and
> just doing a conditional on the "type" after all, so when it isn't
> inlined, you have those things that don't predict well at all.
> 
> And the iter interfaces don't have just that iterator, they have the
> whole kiocb overhead too. All in the name of being generic. Most file
> descriptors don't even support the simpler ".read" interface, because
> they want the whole thing with IOCB_DIRECT flags etc.
> 
> So to some degree it's unfair to compare read-vs-read_iter. The latter
> has all that disgusting support for O_DIRECT and friends, and testing
> with /dev/null just doesn't show that part.

Oh I agree, and particularly for the "read from /dev/zero" case it's
not very interesting, as the two paths do quite different things there
as well. It was just more of a "gah it's potentially this bad" outburst
than anything else; the numbers I did care about were readv before and
after the patches, not read vs readv.

That said, there might be things to improve here. But that's a task
for another time. perf diff of a read vs readv run below.


# Event 'cycles'
#
# Baseline  Delta Abs  Shared Object         Symbol                               
# ........  .........  ....................  .....................................
#
              +40.56%  [kernel.vmlinux]      [k] iov_iter_zero
              +12.59%  [kernel.vmlinux]      [k] copy_user_enhanced_fast_string
    21.56%    -11.10%  [kernel.vmlinux]      [k] entry_SYSCALL_64
               +7.67%  [kernel.vmlinux]      [k] _copy_from_user
               +7.40%  libc.so.6             [.] __GI___readv
     3.76%     -2.22%  [kernel.vmlinux]      [k] __fsnotify_parent
               +2.13%  [kernel.vmlinux]      [k] do_iter_read
               +2.02%  [kernel.vmlinux]      [k] do_iter_readv_writev
               +1.89%  [kernel.vmlinux]      [k] __import_iovec
               +1.59%  [kernel.vmlinux]      [k] do_readv
     3.15%     -1.43%  [kernel.vmlinux]      [k] __fget_light
               +1.42%  [kernel.vmlinux]      [k] vfs_readv
               +1.32%  [kernel.vmlinux]      [k] read_iter_zero
     2.39%     -1.30%  [kernel.vmlinux]      [k] syscall_exit_to_user_mode
     1.89%     -1.17%  [kernel.vmlinux]      [k] exit_to_user_mode_prepare
     2.01%     -1.10%  [kernel.vmlinux]      [k] do_syscall_64
     2.04%     -1.06%  [kernel.vmlinux]      [k] __fdget_pos
     1.93%     -0.99%  [kernel.vmlinux]      [k] syscall_enter_from_user_mode
               +0.81%  [kernel.vmlinux]      [k] __get_task_ioprio
     1.03%     -0.56%  [kernel.vmlinux]      [k] fpregs_assert_state_consistent
     0.85%     -0.49%  [kernel.vmlinux]      [k] syscall_exit_to_user_mode_prepare
               +0.45%  [kernel.vmlinux]      [k] import_iovec
               +0.20%  [kernel.vmlinux]      [k] kfree
               +0.18%  [kernel.vmlinux]      [k] __x64_sys_readv
               +0.06%  read-zero             [.] readv@plt
               +0.01%  [kernel.vmlinux]      [k] filemap_map_pages
               +0.01%  ld-linux-x86-64.so.2  [.] check_match
     0.00%     +0.00%  [kernel.vmlinux]      [k] memset_erms
     0.00%     -0.00%  [kernel.vmlinux]      [k] perf_iterate_ctx
               +0.00%  [kernel.vmlinux]      [k] xfs_iunlock
     0.49%     -0.00%  read-zero             [.] main
               +0.00%  [kernel.vmlinux]      [k] arch_get_unmapped_area_topdown
               +0.00%  [kernel.vmlinux]      [k] pcpu_alloc
               +0.00%  [kernel.vmlinux]      [k] perf_event_exec
               +0.00%  taskset               [.] 0x0000000000002ebd
     0.00%     +0.00%  [kernel.vmlinux]      [k] perf_ibs_handle_irq
     0.00%     -0.00%  [kernel.vmlinux]      [k] perf_ibs_start
    32.88%             [kernel.vmlinux]      [k] read_zero
    15.22%             libc.so.6             [.] read
     6.27%             [kernel.vmlinux]      [k] vfs_read
     2.60%             [kernel.vmlinux]      [k] ksys_read
     1.02%             [kernel.vmlinux]      [k] __cond_resched
     0.41%             [kernel.vmlinux]      [k] rcu_all_qs
     0.35%             [kernel.vmlinux]      [k] __x64_sys_read
     0.12%             read-zero             [.] read@plt
     0.01%             ld-linux-x86-64.so.2  [.] _dl_load_cache_lookup
     0.01%             ld-linux-x86-64.so.2  [.] _dl_check_map_versions
     0.00%             [kernel.vmlinux]      [k] _find_next_or_bit
     0.00%             [kernel.vmlinux]      [k] perf_event_update_userpage
     0.00%             [kernel.vmlinux]      [k] native_sched_clock

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF
  2023-03-30 17:33   ` Jens Axboe
@ 2023-03-30 21:53     ` Linus Torvalds
  2023-03-30 22:18       ` Jens Axboe
  0 siblings, 1 reply; 18+ messages in thread
From: Linus Torvalds @ 2023-03-30 21:53 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-fsdevel, brauner, viro

[-- Attachment #1: Type: text/plain, Size: 1291 bytes --]

On Thu, Mar 30, 2023 at 10:33 AM Jens Axboe <axboe@kernel.dk> wrote:
>
> That said, there might be things to improve here. But that's a task
> for another time.

So I ended up looking at this, and funnily enough, the *compat*
version of the "copy iovec from user" is actually written to be a lot
more efficient than the "native" version.

The reason is that the compat version has to load the data one field
at a time anyway to do the conversion, so it open-codes the loop. And
it does it all using the efficient "user_access_begin()" etc, so it
generates good code.

In contrast, the native version just does a "copy_from_user()" and
then loops over the result to verify it. And that's actually pretty
horrid. Doing the open-coded loop that fetches and verifies the iov
entries one at a time should be much better.

I dunno. That's my gut feel, at least. And it may explain why your
"readv()" benchmark has "_copy_from_user()" much higher up than the
"read()" case.

Something like the attached *may* help.

Untested - I only checked the generated assembly to see that it seems
to be sane, but I might have done something stupid. I basically copied
the compat code, fixed it up for non-compat types, and then massaged
it a bit more.

                 Linus

[-- Attachment #2: patch.diff --]
[-- Type: text/x-patch, Size: 1567 bytes --]

 lib/iov_iter.c | 35 ++++++++++++++++++++++++++---------
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 274014e4eafe..e793d6ca299c 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1731,18 +1731,35 @@ static int copy_compat_iovec_from_user(struct iovec *iov,
 }
 
 static int copy_iovec_from_user(struct iovec *iov,
-		const struct iovec __user *uvec, unsigned long nr_segs)
+		const struct iovec __user *uiov, unsigned long nr_segs)
 {
-	unsigned long seg;
+	int ret = -EFAULT;
 
-	if (copy_from_user(iov, uvec, nr_segs * sizeof(*uvec)))
+	if (!user_access_begin(uiov, nr_segs * sizeof(*uiov)))
 		return -EFAULT;
-	for (seg = 0; seg < nr_segs; seg++) {
-		if ((ssize_t)iov[seg].iov_len < 0)
-			return -EINVAL;
-	}
 
-	return 0;
+	do {
+		void __user *buf;
+		ssize_t len;
+
+		unsafe_get_user(len, &uiov->iov_len, uaccess_end);
+		unsafe_get_user(buf, &uiov->iov_base, uaccess_end);
+
+		/* check for size_t not fitting in ssize_t .. */
+		if (unlikely(len < 0)) {
+			ret = -EINVAL;
+			goto uaccess_end;
+		}
+		iov->iov_base = buf;
+		iov->iov_len = len;
+
+		uiov++; iov++;
+	} while (--nr_segs);
+
+	ret = 0;
+uaccess_end:
+	user_access_end();
+	return ret;
 }
 
 struct iovec *iovec_from_user(const struct iovec __user *uvec,
@@ -1767,7 +1784,7 @@ struct iovec *iovec_from_user(const struct iovec __user *uvec,
 			return ERR_PTR(-ENOMEM);
 	}
 
-	if (compat)
+	if (unlikely(compat))
 		ret = copy_compat_iovec_from_user(iov, uvec, nr_segs);
 	else
 		ret = copy_iovec_from_user(iov, uvec, nr_segs);

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF
  2023-03-30 21:53     ` Linus Torvalds
@ 2023-03-30 22:18       ` Jens Axboe
  2023-04-02 22:22         ` Jens Axboe
  0 siblings, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2023-03-30 22:18 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-fsdevel, brauner, viro

On 3/30/23 3:53 PM, Linus Torvalds wrote:
> On Thu, Mar 30, 2023 at 10:33 AM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> That said, there might be things to improve here. But that's a task
>> for another time.
> 
> So I ended up looking at this, and funnily enough, the *compat*
> version of the "copy iovec from user" is actually written to be a lot
> more efficient than the "native" version.
> 
> The reason is that the compat version has to load the data one field
> at a time anyway to do the conversion, so it open-codes the loop. And
> it does it all using the efficient "user_access_begin()" etc, so it
> generates good code.
> 
> In contrast, the native version just does a "copy_from_user()" and
> then loops over the result to verify it. And that's actually pretty
> horrid. Doing the open-coded loop that fetches and verifies the iov
> entries one at a time should be much better.
> 
> I dunno. That's my gut feel, at least. And it may explain why your
> "readv()" benchmark has "_copy_from_user()" much higher up than the
> "read()" case.
> 
> Something like the attached *may* help.
> 
> Untested - I only checked the generated assembly to see that it seems
> to be sane, but I might have done something stupid. I basically copied
> the compat code, fixed it up for non-compat types, and then massaged
> it a bit more.

That's a nice improvement - about 6% better for the single vec case,
and that's for the full "benchmark". Here are the numbers in usec for
the read-zero test. Lower is better, obviously.

-git
1793883
1809305
1782602
1777280
1803978
1798792
1791190
1802017
1804558
1813370
1807696
1785887
1785506
1789876
1780018
1793932
1803655
1798186

-git+patch
1685393
1685891
1688886
1679967
1687551
1693233
1684883
1688779
1682103
1684944
1686928
1687984
1686729
1687009
1684660
1687295
1684893
1685309

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF
  2023-03-30 22:18       ` Jens Axboe
@ 2023-04-02 22:22         ` Jens Axboe
  2023-04-02 23:37           ` Linus Torvalds
  0 siblings, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2023-04-02 22:22 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-fsdevel, brauner, viro

On 3/30/23 4:18 PM, Jens Axboe wrote:
> On 3/30/23 3:53 PM, Linus Torvalds wrote:
>> On Thu, Mar 30, 2023 at 10:33 AM Jens Axboe <axboe@kernel.dk> wrote:
>>>
>>> That said, there might be things to improve here. But that's a task
>>> for another time.
>>
>> So I ended up looking at this, and funnily enough, the *compat*
>> version of the "copy iovec from user" is actually written to be a lot
>> more efficient than the "native" version.
>>
>> The reason is that the compat version has to load the data one field
>> at a time anyway to do the conversion, so it open-codes the loop. And
>> it does it all using the efficient "user_access_begin()" etc, so it
>> generates good code.
>>
>> In contrast, the native version just does a "copy_from_user()" and
>> then loops over the result to verify it. And that's actually pretty
>> horrid. Doing the open-coded loop that fetches and verifies the iov
>> entries one at a time should be much better.
>>
>> I dunno. That's my gut feel, at least. And it may explain why your
>> "readv()" benchmark has "_copy_from_user()" much higher up than the
>> "read()" case.
>>
>> Something like the attached *may* help.
>>
>> Untested - I only checked the generated assembly to see that it seems
>> to be sane, but I might have done something stupid. I basically copied
>> the compat code, fixed it up for non-compat types, and then massaged
>> it a bit more.
> 
> That's a nice improvement - about 6% better for the single vec case,
> And that's the full "benchmark". Here are the numbers in usec for
> the read-zero. Lower is better, obviously.

Linus, are you going to turn this into a proper patch? This is too
good to not pursue.


-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCHSET v6b 0/11] Turn single segment imports into ITER_UBUF
  2023-04-02 22:22         ` Jens Axboe
@ 2023-04-02 23:37           ` Linus Torvalds
  0 siblings, 0 replies; 18+ messages in thread
From: Linus Torvalds @ 2023-04-02 23:37 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-fsdevel, brauner, viro

On Sun, Apr 2, 2023 at 3:22 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> Linus, are you going to turn this into a proper patch? This is too
> good to not pursue.

So I initially was planning on doing just that for 6.4.

However, I looked more at it, and I'm now fairly convinced that the
biggest problem is that we have basically screwed up our simple
"copy_to/from_user()" with all the indirection.

So yes, that patch may end up being a 6% improvement on the made-up
load, but a *large* reason for that is that we just do horribly badly
on a plain "copy_from_user()", and I think I could fix that.

The problems with "copy_from_user()" are:

 - it's *two* function calls ("_copy_to_user()" and then "raw_copy_to_user()")

 - we have entirely lost all "this is how big the copy is" information at all levels

and my stupid patch basically improves on copy_iovec_from_user() by
fixing those two issues:

 - it inlines it all by using the "unsafe_get_user()" interfaces

 - it recovers the access sizes by just accessing the fields directly

and in the process it gets rid of us being really really piggy in
"copy_from_user()".

Now, there are a few reasons *why* we are so piggy in copy_from_user(), mainly

 (a) CLAC/STAC is just slow and 'access_ok()' is big

 (b) we have lots of debug boiler plate crap. Things like
      - might_fault() (PROVE_LOCKING || DEBUG_ATOMIC_SLEEP)
      - check_copy_size() (HARDENED_USERCOPY)
      - test should_fail_usercopy() (FAULT_INJECTION_USERCOPY)
      - do instrument_copy_from_user_before/after (KASAN)
      - clear the end of the buffer on failures (legacy behavior)

 (c) you probably don't have a CPU with X86_FEATURE_FSRM

Now, what (a) and (b) do is make it unreasonable to inline
copy_from/to_user(). Particularly when those debug options are set,
but even without them, it's just not reasonable to inline.

Long long ago, we used to special-case small constant-sized user
copies, but all of that has gone away. That's lovely (it used to be
horrid in the header files, and caused problems)

And (c) means that the small cases tend to suck, although with all the
other overhead (two function calls, possibly with return stack
counting, CLAC/STAC, 'lfence' for speculation control etc etc), that's
almost not an issue. It's just extra cycles.

Again, that hack to copy_iovec_from_user() ends up working so well
simply because it avoids all the *stupid* stuff. Yes, it still
obviously does the CLAC/STAC and the access_ok(), but it's ok to
inline when it's just that special code, not some random
'copy_from_user()' that doesn't matter.

Anyway, what this long rant is about is that I'm looking at what I can
do to improve "copy_from_user()" instead. It's a pain, because of all
those debug options, but I actually have a disgusting patch that would
make it possible to have a much better copy_from_user.

How disgusting, you say? Let me quote part of it:

  +#define __a(n,a) ((unsigned long)(n)&((a)-1))
  +#define statically_aligned(n,a) \
  +       (__builtin_constant_p(__a(n,a)) && !__a(n,a))
  +
  +extern unsigned long copy_from_user_a16(void *to, const void __user *from, unsigned long n);
  +extern unsigned long copy_from_user_a8(void *to, const void __user *from, unsigned long n);
  +extern unsigned long copy_from_user_a4(void *to, const void __user *from, unsigned long n);
  ...
  +               if (statically_aligned(n, 16))
  +                       n = copy_from_user_a16(to, from, n);
  +               else if (statically_aligned(n, 8))
  +                       n = copy_from_user_a8(to, from, n);
  +               else if (statically_aligned(n, 4))
  +                       n = copy_from_user_a4(to, from, n);
  +               else
  +                       n = _copy_from_user(to, from, n);

and it turns out that both gcc and clang are smart enough that even
when you don't have a *constant* size, when you have code like

        if (copy_from_user(iov, uvec, nr_segs * sizeof(*uvec)))
                return -EFAULT;

that "statically_aligned()" thing will notice that "look, that size is
a multiple of 16", and my disgusting patch replaces the "I have lost
all sign of the size of the copy" user copy with a call to
copy_from_user_a16().

And interestingly, that "size is 16-byte aligned" happens a *lot*. It
happens for that uvec case, but it also happens for things like
"cp_new_stat()" that copies a "struct stat" to user space, because
'struct stat' is something like 144 bytes (9*16) on x86-64.
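
As a small standalone illustration of that effect (an assumed userspace
toy, not part of the patch; at -O2 both gcc and clang typically fold the
masked low bits to a constant zero even though nr_segs is a runtime
value, while at -O0 the check falls back to the generic path):

#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>

#define __a(n, a)		((unsigned long)(n) & ((a) - 1))
#define statically_aligned(n, a) \
	(__builtin_constant_p(__a(n, a)) && !__a(n, a))

int main(int argc, char **argv)
{
	unsigned long nr_segs = argc > 1 ? strtoul(argv[1], NULL, 0) : 4;
	unsigned long bytes = nr_segs * sizeof(struct iovec);

	/* sizeof(struct iovec) is 16 on LP64, so bytes & 15 can fold to 0
	 * and the compiler can statically pick the 16-byte aligned copy */
	if (statically_aligned(bytes, 16))
		printf("%lu bytes: provably 16-byte aligned copy\n", bytes);
	else
		printf("%lu bytes: generic copy path\n", bytes);
	return 0;
}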

So yes, I can improve the iovec copy. Easy peasy. Get rid of the crazy
overhead from a generic interface and from our disgusting debugging
code.

Or I could try to improve copy_to/from_user() in general.

My current patch is too ugly to actually live, and making it even
better (encode the size range when it is statically known, not just a
few "size is X-byte aligned") would make it uglier still.

I'm idly thinking about trying to teach 'objtool' to pick the right
function target by hiding the size information in a separate section.

I may decide that "just doing iovecs" is just so much simpler, and
that trying to improve the generic case is too painful. But that's the
issue right now - I know I *could* just improve copy_from_user()
enough that at least with sane configurations, that iovec improvement
would basically not be worth it.

The problem here is at least partly "sane configurations". At least
Fedora enables HARDENED_USERCOPY. And if you have that enabled, then
"copy_to/from_user()" is _always_ going to suck, and doing a special
"copy iovecs by avoiding it" is always going to be better, if only
because you avoid the debug overhead.

Argh.

             Linus

^ permalink raw reply	[flat|nested] 18+ messages in thread
