* [PATCH] nvmet: disable direct I/O when unavailable
@ 2019-02-21 18:22 ` Johannes Thumshirn
  0 siblings, 0 replies; 20+ messages in thread
From: Johannes Thumshirn @ 2019-02-21 18:22 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Linux Kernel Mailinglist, Linux NVMe Mailinglist, Johannes Thumshirn

Some file-systems, like tmpfs, do not support direct IO, but file-backed
namespaces default to using direct IO. If direct IO is unavailable, fall
back to using buffered IO for the file-backed namespace.

This might not ultimately be a solution for production environments, but
for test environments it is sometimes feasible to use tmpfs.

Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/nvme/target/io-cmd-file.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index 517522305e5c..8a861cc0160e 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -38,11 +38,21 @@ int nvmet_file_ns_enable(struct nvmet_ns *ns)
 
 	ns->file = filp_open(ns->device_path, flags, 0);
 	if (IS_ERR(ns->file)) {
+		if (ns->file == ERR_PTR(-EINVAL) && (flags & O_DIRECT)) {
+			flags &= ~O_DIRECT;
+			ns->buffered_io = 0;
+			ns->file = filp_open(ns->device_path, flags, 0);
+			if (!IS_ERR(ns->file)) {
+				pr_info("direct I/O unavailable, falling back to buffered I/O\n");
+				goto getattr;
+			}
+		}
 		pr_err("failed to open file %s: (%ld)\n",
 				ns->device_path, PTR_ERR(ns->file));
 		return PTR_ERR(ns->file);
 	}
 
+getattr:
 	ret = vfs_getattr(&ns->file->f_path,
 			&stat, STATX_SIZE, AT_STATX_FORCE_SYNC);
 	if (ret)
-- 
2.16.4


* Re: [PATCH] nvmet: disable direct I/O when unavailable
  2019-02-21 18:22 ` Johannes Thumshirn
@ 2019-02-22  0:41   ` Chaitanya Kulkarni
  -1 siblings, 0 replies; 20+ messages in thread
From: Chaitanya Kulkarni @ 2019-02-22  0:41 UTC (permalink / raw)
  To: Johannes Thumshirn, Christoph Hellwig
  Cc: Linux Kernel Mailinglist, Linux NVMe Mailinglist

On 02/21/2019 10:23 AM, Johannes Thumshirn wrote:
> Some file-systems, like tmpfs, do not support direct IO, but file-backed
> namespaces default to using direct IO. If direct IO is unavailable, fall
> back to using buffered IO for the file-backed namespace.
>
> This might not ultimately be a solution for production environments, but
> for test environments it is sometimes feasible to use tmpfs.
>
> Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
> ---
>   drivers/nvme/target/io-cmd-file.c | 10 ++++++++++
>   1 file changed, 10 insertions(+)
>
> diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
> index 517522305e5c..8a861cc0160e 100644
> --- a/drivers/nvme/target/io-cmd-file.c
> +++ b/drivers/nvme/target/io-cmd-file.c
> @@ -38,11 +38,21 @@ int nvmet_file_ns_enable(struct nvmet_ns *ns)
>
>   	ns->file = filp_open(ns->device_path, flags, 0);
>   	if (IS_ERR(ns->file)) {
> +		if (ns->file == ERR_PTR(-EINVAL) && (flags & O_DIRECT)) {
> +			flags &= ~O_DIRECT;
> +			ns->buffered_io = 0;
This needs to be set to 1 when we enable buffered I/O.
> +			ns->file = filp_open(ns->device_path, flags, 0);
> +			if (!IS_ERR(ns->file)) {
> +				pr_info("direct I/O unavailable, falling back to buffered I/O\n");
> +				goto getattr;
> +			}
> +		}
>   		pr_err("failed to open file %s: (%ld)\n",
>   				ns->device_path, PTR_ERR(ns->file));
>   		return PTR_ERR(ns->file);
>   	}
>
> +getattr:
>   	ret = vfs_getattr(&ns->file->f_path,
>   			&stat, STATX_SIZE, AT_STATX_FORCE_SYNC);
>   	if (ret)

As specified in the patch, this is only useful for testing, so we 
should modify the test scripts to switch the ctrl to buffered I/O on 
creation, before running fio.

OR

A similar result can be achieved by setting the buffered I/O flag
buffered_io=1 before enabling the namespace in the test script.
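[Editor's note] A sketch of that test-script approach, assuming the standard nvmet configfs layout (the subsystem NQN, namespace ID, and backing path below are examples):

```shell
# Configure a file-backed namespace to use buffered I/O before enabling
# it; paths assume configfs is mounted at /sys/kernel/config.
ns=/sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
echo /dev/shm/backing-file > "$ns/device_path"
echo 1 > "$ns/buffered_io"   # avoid O_DIRECT on the backing file
echo 1 > "$ns/enable"
```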




* Re: [PATCH] nvmet: disable direct I/O when unavailable
  2019-02-22  0:41   ` Chaitanya Kulkarni
@ 2019-02-22  5:55     ` Johannes Thumshirn
  -1 siblings, 0 replies; 20+ messages in thread
From: Johannes Thumshirn @ 2019-02-22  5:55 UTC (permalink / raw)
  To: Chaitanya Kulkarni, Christoph Hellwig
  Cc: Linux Kernel Mailinglist, Linux NVMe Mailinglist

On 22/02/2019 01:41, Chaitanya Kulkarni wrote:
[...]
> 
> As specified in the patch, this is only useful for testing, so we
> should modify the test scripts to switch the ctrl to buffered I/O on
> creation, before running fio.

Or on any other file-system that does not support DIO.

> 
> OR
> 
> A similar result can be achieved by setting the buffered I/O flag
> buffered_io=1 before enabling the namespace in the test script.

Frankly, we have a ton of testing-related special cases in the kernel.

This one is a) simple and small, only 10 LoC, b) far away from the fast
path or any other place where it could have any impact on legitimate
users, and c) it prints an informational message showing you what happened.

Sorry but this is a https://xkcd.com/386/ moment.

Byte,
	Johannes
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: [PATCH] nvmet: disable direct I/O when unavailable
  2019-02-22  5:55     ` Johannes Thumshirn
@ 2019-02-24 10:54       ` Max Gurtovoy
  -1 siblings, 0 replies; 20+ messages in thread
From: Max Gurtovoy @ 2019-02-24 10:54 UTC (permalink / raw)
  To: Johannes Thumshirn, Chaitanya Kulkarni, Christoph Hellwig
  Cc: Linux Kernel Mailinglist, Linux NVMe Mailinglist


On 2/22/2019 7:55 AM, Johannes Thumshirn wrote:
> On 22/02/2019 01:41, Chaitanya Kulkarni wrote:
> [...]
>> As specified in the patch, this is only useful for testing, so we
>> should modify the test scripts to switch the ctrl to buffered I/O on
>> creation, before running fio.
> Or on any other file-system that does not support DIO.

Do we really want to support these kinds of filesystems as fabrics 
backend devices? Only for testing?

What is the status of the iSCSI/SRP targets in this case?


>
>> OR
>>
>> A similar result can be achieved by setting the buffered I/O flag
>> buffered_io=1 before enabling the namespace in the test script.
> Frankly, we have a ton of testing-related special cases in the kernel.
>
> This one is a) simple and small, only 10 LoC, b) far away from the fast
> path or any other place where it could have any impact on legitimate
> users, and c) it prints an informational message showing you what happened.
>
> Sorry but this is a https://xkcd.com/386/ moment.
>
> Byte,
> 	Johannes


* Re: [PATCH] nvmet: disable direct I/O when unavailable
  2019-02-24 10:54       ` Max Gurtovoy
@ 2019-02-25  9:37         ` Johannes Thumshirn
  -1 siblings, 0 replies; 20+ messages in thread
From: Johannes Thumshirn @ 2019-02-25  9:37 UTC (permalink / raw)
  To: Max Gurtovoy, Chaitanya Kulkarni, Christoph Hellwig
  Cc: Linux Kernel Mailinglist, Linux NVMe Mailinglist

On 24/02/2019 11:54, Max Gurtovoy wrote:
> 
> On 2/22/2019 7:55 AM, Johannes Thumshirn wrote:
>> On 22/02/2019 01:41, Chaitanya Kulkarni wrote:
>> [...]
>>> As specified in the patch, this is only useful for testing, so we
>>> should modify the test scripts to switch the ctrl to buffered I/O on
>>> creation, before running fio.
>> Or on any other file-system that does not support DIO.
> 
> Do we really want to support these kinds of filesystems as fabrics
> backend devices? Only for testing?
> 
> What is the status of the iSCSI/SRP targets in this case?

iSCSI/SRP passes in the following:

 /*
  * Use O_DSYNC by default instead of O_SYNC to forgo syncing
  * of pure timestamp updates.
  */
 flags = O_RDWR | O_CREAT | O_LARGEFILE | O_DSYNC;

Meaning it uses O_DSYNC, so for instance if I do a dd to tmpfs with O_DSYNC:
$ dd if=/dev/zero of=/dev/shm/foo oflag=dsync bs=4k count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175094 s, 23.4 MB/s

Same with O_DIRECT:
$ dd if=/dev/zero of=/dev/shm/foo oflag=direct bs=4k count=1
dd: failed to open '/dev/shm/foo': Invalid argument


-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: [PATCH] nvmet: disable direct I/O when unavailable
  2019-02-25  9:37         ` Johannes Thumshirn
@ 2019-02-25 14:49           ` Bart Van Assche
  -1 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-02-25 14:49 UTC (permalink / raw)
  To: Johannes Thumshirn, Max Gurtovoy, Chaitanya Kulkarni, Christoph Hellwig
  Cc: Linux Kernel Mailinglist, Linux NVMe Mailinglist

On 2/25/19 1:37 AM, Johannes Thumshirn wrote:
> On 24/02/2019 11:54, Max Gurtovoy wrote:
>> What is the status of iSCSI/SRP targets in this case ?
> 
> iSCSI/SRP passes in the following:
> 
>   /*
>    * Use O_DSYNC by default instead of O_SYNC to forgo syncing
>    * of pure timestamp updates.
>    */
>   flags = O_RDWR | O_CREAT | O_LARGEFILE | O_DSYNC;

That code fragment comes from the LIO file backend. There is no 
requirement for a SCSI target core file backend to use O_DSYNC. SCST 
allows users to choose whether or not O_DSYNC should be used:

	if (virt_dev->wt_flag && !virt_dev->nv_cache)
		open_flags |= O_DSYNC;

Bart.



* Re: [PATCH] nvmet: disable direct I/O when unavailable
  2019-02-25 14:49           ` Bart Van Assche
@ 2019-02-25 15:38             ` Johannes Thumshirn
  -1 siblings, 0 replies; 20+ messages in thread
From: Johannes Thumshirn @ 2019-02-25 15:38 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Max Gurtovoy, Chaitanya Kulkarni, Christoph Hellwig,
	Linux Kernel Mailinglist, Linux NVMe Mailinglist

On Mon, Feb 25, 2019 at 06:49:04AM -0800, Bart Van Assche wrote:
> On 2/25/19 1:37 AM, Johannes Thumshirn wrote:
> > On 24/02/2019 11:54, Max Gurtovoy wrote:
> > > What is the status of iSCSI/SRP targets in this case ?
> > 
> > iSCSI/SRP passes in the following:
> > 
> >   /*
> >    * Use O_DSYNC by default instead of O_SYNC to forgo syncing
> >    * of pure timestamp updates.
> >    */
> >   flags = O_RDWR | O_CREAT | O_LARGEFILE | O_DSYNC;
> 
> That code fragment comes from the LIO file backend. There is no requirement
> for a SCSI target core file backend to use O_DSYNC. SCST allows users to
> choose whether or not O_DSYNC should be used:

Yes, the LIO file backend is the in-tree equivalent of NVMe's file backend;
that's why I copied it here.


> 
> 	if (virt_dev->wt_flag && !virt_dev->nv_cache)
> 		open_flags |= O_DSYNC;
> 

OK. Do your open_flags include O_DIRECT by default as well?

Byte,
	Johannes
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: [PATCH] nvmet: disable direct I/O when unavailable
  2019-02-25 15:38             ` Johannes Thumshirn
@ 2019-02-25 16:10               ` Bart Van Assche
  -1 siblings, 0 replies; 20+ messages in thread
From: Bart Van Assche @ 2019-02-25 16:10 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: Max Gurtovoy, Chaitanya Kulkarni, Christoph Hellwig,
	Linux Kernel Mailinglist, Linux NVMe Mailinglist

On Mon, 2019-02-25 at 16:38 +0100, Johannes Thumshirn wrote:
> On Mon, Feb 25, 2019 at 06:49:04AM -0800, Bart Van Assche wrote:
> > On 2/25/19 1:37 AM, Johannes Thumshirn wrote:
> > > On 24/02/2019 11:54, Max Gurtovoy wrote:
> > > > What is the status of iSCSI/SRP targets in this case ?
> > > 
> > > iSCSI/SRP passes in the following:
> > > 
> > >   /*
> > >    * Use O_DSYNC by default instead of O_SYNC to forgo syncing
> > >    * of pure timestamp updates.
> > >    */
> > >   flags = O_RDWR | O_CREAT | O_LARGEFILE | O_DSYNC;
> > 
> > That code fragment comes from the LIO file backend. There is no requirement
> > for a SCSI target core file backend to use O_DSYNC. SCST allows users to
> > choose whether or not O_DSYNC should be used:
> 
> Yes, LIO file backend is the in-tree equivalent to NVMe's file backend. That's
> why I copied it here.
> 
> 
> > 
> > 	if (virt_dev->wt_flag && !virt_dev->nv_cache)
> > 		open_flags |= O_DSYNC;
> > 
> 
> OK. Do your open_flags include O_DIRECT per default as well?

I don't think it has ever been supported to pass O_DIRECT to filp_open().
As one can see in build_open_flags() O_DIRECT is ignored. The only way I
know of to submit direct I/O from kernel context is by setting the
IOCB_DIRECT flag.

Bart.
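[Editor's note] A kernel-context sketch (not userspace-buildable; the function name is illustrative) of requesting direct I/O via the kiocb rather than the open flags, as the nvmet file backend does:

```c
/* Submit a synchronous direct read from kernel context: mark the kiocb
 * with IOCB_DIRECT instead of relying on O_DIRECT at open time. */
static ssize_t file_direct_read(struct file *file, struct iov_iter *iter,
				loff_t pos)
{
	struct kiocb iocb;

	init_sync_kiocb(&iocb, file);
	iocb.ki_pos = pos;
	iocb.ki_flags |= IOCB_DIRECT;	/* bypass the page cache */

	return call_read_iter(file, &iocb, iter);
}
```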


* Re: [PATCH] nvmet: disable direct I/O when unavailable
  2019-02-22  5:55     ` Johannes Thumshirn
@ 2019-02-25 21:18       ` Sagi Grimberg
  -1 siblings, 0 replies; 20+ messages in thread
From: Sagi Grimberg @ 2019-02-25 21:18 UTC (permalink / raw)
  To: Johannes Thumshirn, Chaitanya Kulkarni, Christoph Hellwig
  Cc: Linux Kernel Mailinglist, Linux NVMe Mailinglist


> Frankly, we have a ton of testing-related special cases in the kernel.
> 
> This one is a) simple and small, only 10 LoC, b) far away from the fast
> path or any other place where it could have any impact on legitimate
> users, and c) it prints an informational message showing you what happened.
> 
> Sorry but this is a https://xkcd.com/386/ moment.

How about we just fail it with a proper error message and let the
user set buffered_io instead of trying to get smart here...


* Re: [PATCH] nvmet: disable direct I/O when unavailable
  2019-02-25 21:18       ` Sagi Grimberg
@ 2019-03-08 13:26         ` Christoph Hellwig
  -1 siblings, 0 replies; 20+ messages in thread
From: Christoph Hellwig @ 2019-03-08 13:26 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Johannes Thumshirn, Chaitanya Kulkarni, Christoph Hellwig,
	Linux Kernel Mailinglist, Linux NVMe Mailinglist

On Mon, Feb 25, 2019 at 01:18:10PM -0800, Sagi Grimberg wrote:
>
>> Frankly, we have a ton of testing-related special cases in the kernel.
>>
>> This one is a) simple and small, only 10 LoC, b) far away from the fast
>> path or any other place where it could have any impact on legitimate
>> users, and c) it prints an informational message showing you what happened.
>>
>> Sorry but this is a https://xkcd.com/386/ moment.
>
> How about we just fail it with a proper error message and let the
> user set buffered_io instead of trying to get smart here...

Yes, would be my preference.

