linux-fsdevel.vger.kernel.org archive mirror
* [PATCH 5/9 next] scsi: Use iovec_import() instead of import_iovec().
@ 2020-09-15 14:55 David Laight
  2020-09-21 14:22 ` Christoph Hellwig
  0 siblings, 1 reply; 4+ messages in thread
From: David Laight @ 2020-09-15 14:55 UTC (permalink / raw)
  To: linux-kernel, netdev, io-uring, Jens Axboe, David S. Miller,
	Al Viro, linux-fsdevel


iovec_import() has a safer calling convention than import_iovec():
instead of returning an error code and passing the iovec back through a
separate output pointer, it returns the iovec itself (or an ERR_PTR()
value on failure) and fills in the iov_iter, so callers no longer have
to pre-initialise the iovec pointer to NULL.
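
For reference, a sketch of the prototype this patch assumes (inferred
from the call sites in the diff below; the parameter names, and the
reading of the fourth argument as an optional caller-supplied cache,
are assumptions rather than the final API):

/*
 * Sketch only: returns the (possibly kmalloc'ed) iovec on success,
 * or an ERR_PTR() value on failure, and fills in *i.
 * A NULL 'cache' means "allocate as needed".
 */
struct iovec *iovec_import(int type, const struct iovec __user *uvec,
			   unsigned int nr_segs, struct iovec *cache,
			   struct iov_iter *i);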

Signed-off-by: David Laight <david.laight@aculab.com>
---
 block/scsi_ioctl.c | 14 ++++++++------
 drivers/scsi/sg.c  | 14 +++++++-------
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
index ef722f04f88a..0343918a84d3 100644
--- a/block/scsi_ioctl.c
+++ b/block/scsi_ioctl.c
@@ -331,20 +331,22 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
 	ret = 0;
 	if (hdr->iovec_count) {
 		struct iov_iter i;
-		struct iovec *iov = NULL;
+		struct iovec *iov;
 
 #ifdef CONFIG_COMPAT
 		if (in_compat_syscall())
-			ret = compat_import_iovec(rq_data_dir(rq),
+			iov = compat_iovec_import(rq_data_dir(rq),
 				   hdr->dxferp, hdr->iovec_count,
-				   0, &iov, &i);
+				   NULL, &i);
 		else
 #endif
-			ret = import_iovec(rq_data_dir(rq),
+			iov = iovec_import(rq_data_dir(rq),
 				   hdr->dxferp, hdr->iovec_count,
-				   0, &iov, &i);
-		if (ret < 0)
+				   NULL, &i);
+		if (IS_ERR(iov)) {
+			ret = PTR_ERR(iov);
 			goto out_free_cdb;
+		}
 
 		/* SG_IO howto says that the shorter of the two wins */
 		iov_iter_truncate(&i, hdr->dxfer_len);
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index 20472aaaf630..1dbc0a74add5 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -1817,19 +1817,19 @@ sg_start_req(Sg_request *srp, unsigned char *cmd)
 	}
 
 	if (iov_count) {
-		struct iovec *iov = NULL;
+		struct iovec *iov;
 		struct iov_iter i;
 
 #ifdef CONFIG_COMPAT
 		if (in_compat_syscall())
-			res = compat_import_iovec(rw, hp->dxferp, iov_count,
-						  0, &iov, &i);
+			iov = compat_iovec_import(rw, hp->dxferp, iov_count,
+						  NULL, &i);
 		else
 #endif
-			res = import_iovec(rw, hp->dxferp, iov_count,
-					   0, &iov, &i);
-		if (res < 0)
-			return res;
+			iov = iovec_import(rw, hp->dxferp, iov_count,
+					   NULL, &i);
+		if (IS_ERR(iov))
+			return PTR_ERR(iov);
 
 		iov_iter_truncate(&i, hp->dxfer_len);
 		if (!iov_iter_count(&i)) {
-- 
2.25.1


* Re: [PATCH 5/9 next] scsi: Use iovec_import() instead of import_iovec().
  2020-09-15 14:55 [PATCH 5/9 next] scsi: Use iovec_import() instead of import_iovec() David Laight
@ 2020-09-21 14:22 ` Christoph Hellwig
  2020-09-21 14:50   ` David Laight
  2021-01-08 11:13   ` David Laight
  0 siblings, 2 replies; 4+ messages in thread
From: Christoph Hellwig @ 2020-09-21 14:22 UTC (permalink / raw)
  To: David Laight
  Cc: linux-kernel, netdev, io-uring, Jens Axboe, David S. Miller,
	Al Viro, linux-fsdevel

So looking at the various callers I'm not sure this API is the
best.  If we want to do something fancy I'd hide the struct iovec
instances entirely with something like:

struct iov_storage {
	struct iovec stack[UIO_FASTIOV], *vec;
};

int iov_iter_import_iovec(struct iov_iter *iter, struct iov_storage *s,
		const struct iovec __user *vec, unsigned long nr_segs,
		int type);

and then add a new helper to free the thing if needed:

void iov_iter_release_iovec(struct iov_storage *s)
{
	if (s->vec != s->stack)
		kfree(s->vec);
}
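
A minimal usage sketch under the assumption of the prototypes above
(the caller and its name are made up, not existing kernel code):

static int example_import(const struct iovec __user *uvec,
			  unsigned long nr_segs, struct iov_iter *iter)
{
	struct iov_storage s;
	int ret;

	ret = iov_iter_import_iovec(iter, &s, uvec, nr_segs, READ);
	if (ret < 0)
		return ret;

	/* ... consume *iter while 's' is still in scope ... */

	iov_iter_release_iovec(&s);	/* kfree()s s.vec if it was heap-allocated */
	return 0;
}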


* RE: [PATCH 5/9 next] scsi: Use iovec_import() instead of import_iovec().
  2020-09-21 14:22 ` Christoph Hellwig
@ 2020-09-21 14:50   ` David Laight
  2021-01-08 11:13   ` David Laight
  1 sibling, 0 replies; 4+ messages in thread
From: David Laight @ 2020-09-21 14:50 UTC (permalink / raw)
  To: 'Christoph Hellwig'
  Cc: linux-kernel, netdev, io-uring, Jens Axboe, David S. Miller,
	Al Viro, linux-fsdevel

From: Christoph Hellwig
> Sent: 21 September 2020 15:22
> 
> So looking at the various callers I'm not sure this API is the
> best.  If we want to do something fancy I'd hide the struct iovec
> instances entirely with something like:
> 
> struct iov_storage {
> 	struct iovec stack[UIO_FASTIOV], *vec;
> }
> 
> int iov_iter_import_iovec(struct iov_iter *iter, struct iov_storage *s,
> 		const struct iovec __user *vec, unsigned long nr_segs,
> 		int type);
> 
> and then add a new helper to free the thing if needed:
> 
> void iov_iter_release_iovec(struct iov_storage *s)
> {
> 	if (s->vec != s->stack)
> 		kfree(s->vec);
> }

I didn't think of going that far.
There are 2 call sites (in scsi) that don't pass the cache.

Given that the 'buffer to free' address probably needs to be
spilled to the stack, forcing it into an on-stack structure that
is already passed by address is probably a good idea.

The iov_iter_release_iovec() should be static inline and just:
	if (s->vec)
		kfree(s->vec);
You want the test because 99.99% of the time it will be NULL.
The kernel iov[] to use is iter.iov, not part of the cache.
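
Something like this (just a sketch of the variant I mean, reusing the
structure name from above; 'vec' is only set when a heap allocation
was actually needed):

struct iov_storage {
	struct iovec stack[UIO_FASTIOV];
	struct iovec *vec;		/* NULL unless kmalloc'ed */
};

static inline void iov_iter_release_iovec(struct iov_storage *s)
{
	if (s->vec)			/* almost always NULL */
		kfree(s->vec);
}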

That will be a bigger change on the io_uring code.
(The patch I didn't write.)

	David


* RE: [PATCH 5/9 next] scsi: Use iovec_import() instead of import_iovec().
  2020-09-21 14:22 ` Christoph Hellwig
  2020-09-21 14:50   ` David Laight
@ 2021-01-08 11:13   ` David Laight
  1 sibling, 0 replies; 4+ messages in thread
From: David Laight @ 2021-01-08 11:13 UTC (permalink / raw)
  To: 'Christoph Hellwig'
  Cc: linux-kernel, netdev, io-uring, Jens Axboe, David S. Miller,
	Al Viro, linux-fsdevel

From: Christoph Hellwig
> Sent: 21 September 2020 15:22
> 
> So looking at the various callers I'm not sure this API is the
> best.  If we want to do something fancy I'd hide the struct iovec
> instances entirely with something like:
> 
> struct iov_storage {
> 	struct iovec stack[UIO_FASTIOV], *vec;
> }
> 
> int iov_iter_import_iovec(struct iov_iter *iter, struct iov_storage *s,
> 		const struct iovec __user *vec, unsigned long nr_segs,
> 		int type);
> 
> and then add a new helper to free the thing if needed:
> 
> void iov_iter_release_iovec(struct iov_storage *s)
> {
> 	if (s->vec != s->stack)
> 		kfree(s->vec);
> }

I've been looking at this code again now that most of the pending
changes are in Linus's tree.

I was actually looking at going one stage further.
The 'iov_iter' is always allocated with the 'iov_storage' (above).
Usually both are on the caller's stack - possibly in different functions.

So add:
struct iovec_iter {
	struct iov_iter iter;
	struct iovec *to_free;
	struct iovec stack[UIO_FASTIOV];
};

int __iovec_import(struct iovec_iter *, const struct iovec __user *vec,
	unsigned long nr_segs, int type, bool compat);

And a 'clean' function to do kfree(iovec->to_free);

This reduces the complexity of most of the callers.
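
A usage sketch, assuming the __iovec_import() prototype above (the
caller and its names are made up):

static int example_start_req(const struct iovec __user *uvec,
			     unsigned long nr_segs, int rw)
{
	struct iovec_iter ii;	/* iter, to_free and stack[] in one place */
	int res;

	res = __iovec_import(&ii, uvec, nr_segs, rw, in_compat_syscall());
	if (res < 0)
		return res;

	/* ... hand ii.iter to the rest of the request setup ... */

	kfree(ii.to_free);	/* NULL unless the iovec was kmalloc'ed */
	return 0;
}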

I started doing the changes, but got in a mess in io_uring.c (as usual).
I think I've got a patch pending (in my brain) to simplify the io_uring code.

The plan is to add:
	if (iter->iov != xxx->to_free)
		iter->iov = xxx->stack;
prior to every use of the iter.
This fixes up anything that got broken by a memcpy() of the fields.
The tidy-up code is then always kfree(xxx->to_free).
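
As static inline helpers that would look something like this (the
helper names are made up; assumes iter.iov is directly assignable, as
in the current struct iov_iter):

/* Re-point iter.iov at the embedded array if a structure copy broke it. */
static inline void iovec_iter_fixup(struct iovec_iter *ii)
{
	if (ii->iter.iov != ii->to_free)
		ii->iter.iov = ii->stack;
}

static inline void iovec_iter_clean(struct iovec_iter *ii)
{
	kfree(ii->to_free);	/* kfree(NULL) is a no-op */
}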

	David

	

