* [PATCH] NVMe: Increase shutdown complete time
@ 2014-03-31 16:24 Keith Busch
  2014-03-31 18:56 ` Dan McLeran
  2014-04-04 15:22 ` Laura Jessen-SSI
  0 siblings, 2 replies; 12+ messages in thread
From: Keith Busch @ 2014-03-31 16:24 UTC (permalink / raw)


The spec doesn't have a recommendation for shutdown beyond "that the host
wait a minimum of one second for the shutdown operations to complete",
so we need to choose an arbitrary value so we don't wait forever but
high enough to prevent unsafe shutdowns. Some h/w vendors say the previous
two seconds is not long enough at some capacities. Twenty seconds ought
to be enough for anybody, right?

Signed-off-by: Keith Busch <keith.busch at intel.com>
---
 drivers/block/nvme-core.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 625259d..103da93 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -1352,7 +1352,7 @@ static int nvme_shutdown_ctrl(struct nvme_dev *dev)
 	cc = (readl(&dev->bar->cc) & ~NVME_CC_SHN_MASK) | NVME_CC_SHN_NORMAL;
 	writel(cc, &dev->bar->cc);
 
-	timeout = 2 * HZ + jiffies;
+	timeout = 20 * HZ + jiffies;
 	while ((readl(&dev->bar->csts) & NVME_CSTS_SHST_MASK) !=
 							NVME_CSTS_SHST_CMPLT) {
 		msleep(100);
-- 
1.7.10.4
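
For context, the hunk above only shows the new deadline being set; the
check that enforces it sits just below the quoted context in
nvme_shutdown_ctrl, roughly like this sketch (illustrative, not part of
the patch):

	while ((readl(&dev->bar->csts) & NVME_CSTS_SHST_MASK) !=
						NVME_CSTS_SHST_CMPLT) {
		msleep(100);
		if (fatal_signal_pending(current))
			return -EINTR;
		if (time_after(jiffies, timeout)) {
			/* deadline reached: give up instead of spinning forever */
			return -ENODEV;
		}
	}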

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-03-31 16:24 [PATCH] NVMe: Increase shutdown complete time Keith Busch
@ 2014-03-31 18:56 ` Dan McLeran
  2014-03-31 20:37   ` Robles, Raymond C
  2014-04-04 15:22 ` Laura Jessen-SSI
  1 sibling, 1 reply; 12+ messages in thread
From: Dan McLeran @ 2014-03-31 18:56 UTC (permalink / raw)


Seems reasonable to me.

On Mon, 31 Mar 2014, Keith Busch wrote:

> The spec doesn't have a recommendation for shutdown beyond "that the host
> wait a minimum of one second for the shutdown operations to complete",
> so we need to choose an arbitrary value so we don't wait forever but
> high enough to prevent unsafe shutdowns. Some h/w vendors say the previous
> two seconds is not long enough at some capacities. Twenty seconds ought
> to be enough for anybody, right?
>
> Signed-off-by: Keith Busch <keith.busch at intel.com>
> ---
> drivers/block/nvme-core.c |    2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
> index 625259d..103da93 100644
> --- a/drivers/block/nvme-core.c
> +++ b/drivers/block/nvme-core.c
> @@ -1352,7 +1352,7 @@ static int nvme_shutdown_ctrl(struct nvme_dev *dev)
> 	cc = (readl(&dev->bar->cc) & ~NVME_CC_SHN_MASK) | NVME_CC_SHN_NORMAL;
> 	writel(cc, &dev->bar->cc);
>
> -	timeout = 2 * HZ + jiffies;
> +	timeout = 20 * HZ + jiffies;
> 	while ((readl(&dev->bar->csts) & NVME_CSTS_SHST_MASK) !=
> 							NVME_CSTS_SHST_CMPLT) {
> 		msleep(100);
> -- 
> 1.7.10.4
>
>
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme at lists.infradead.org
> http://merlin.infradead.org/mailman/listinfo/linux-nvme
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-03-31 18:56 ` Dan McLeran
@ 2014-03-31 20:37   ` Robles, Raymond C
  2014-03-31 23:57     ` Dan McLeran
  0 siblings, 1 reply; 12+ messages in thread
From: Robles, Raymond C @ 2014-03-31 20:37 UTC (permalink / raw)


You're probably fine waiting anywhere up to CAP.TO.

-----Original Message-----
From: Linux-nvme [mailto:linux-nvme-bounces@lists.infradead.org] On Behalf Of Dan McLeran
Sent: Monday, March 31, 2014 11:57 AM
To: Busch, Keith
Cc: linux-nvme at lists.infradead.org
Subject: Re: [PATCH] NVMe: Increase shutdown complete time

Seems reasonable to me.

On Mon, 31 Mar 2014, Keith Busch wrote:

> The spec doesn't have a recommendation for shutdown beyond "that the 
> host wait a minimum of one second for the shutdown operations to 
> complete", so we need to choose an arbitrarily value so we don't wait 
> forever but high enough to prevent unsafe shutdowns. Some h/w vendors 
> say the previous two seconds is not long enough at some capacities. 
> Twenty seconds ought to be enough for anybody, right?
>
> Signed-off-by: Keith Busch <keith.busch at intel.com>
> ---
> drivers/block/nvme-core.c |    2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c 
> index 625259d..103da93 100644
> --- a/drivers/block/nvme-core.c
> +++ b/drivers/block/nvme-core.c
> @@ -1352,7 +1352,7 @@ static int nvme_shutdown_ctrl(struct nvme_dev *dev)
> 	cc = (readl(&dev->bar->cc) & ~NVME_CC_SHN_MASK) | NVME_CC_SHN_NORMAL;
> 	writel(cc, &dev->bar->cc);
>
> -	timeout = 2 * HZ + jiffies;
> +	timeout = 20 * HZ + jiffies;
> 	while ((readl(&dev->bar->csts) & NVME_CSTS_SHST_MASK) !=
> 							NVME_CSTS_SHST_CMPLT) {
> 		msleep(100);
> --
> 1.7.10.4
>
>
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme at lists.infradead.org
> http://merlin.infradead.org/mailman/listinfo/linux-nvme
>

_______________________________________________
Linux-nvme mailing list
Linux-nvme at lists.infradead.org
http://merlin.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-03-31 20:37   ` Robles, Raymond C
@ 2014-03-31 23:57     ` Dan McLeran
  2014-04-03 22:55       ` Yung-Chin Chen
  0 siblings, 1 reply; 12+ messages in thread
From: Dan McLeran @ 2014-03-31 23:57 UTC (permalink / raw)


CAP.TO is defined as the worst case time the driver should wait for the 
controller to come ready. Not sure it can be used for shutdown as well. If 
so, then we should just read that value and use it as we do in 
nvme_wait_ready.
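
If CAP.TO does apply here, the shutdown deadline could be derived the
same way the ready-wait path derives its timeout, along these lines (a
sketch under that assumption; CAP.TO is expressed in 500 ms units):

	u64 cap = readq(&dev->bar->cap);
	unsigned long timeout = ((NVME_CAP_TIMEOUT(cap) + 1) * HZ / 2) + jiffies;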

On Mon, 31 Mar 2014, Robles, Raymond C wrote:

> You're probably fine waiting anywhere up to CAP.TO.
>
> -----Original Message-----
> From: Linux-nvme [mailto:linux-nvme-bounces at lists.infradead.org] On Behalf Of Dan McLeran
> Sent: Monday, March 31, 2014 11:57 AM
> To: Busch, Keith
> Cc: linux-nvme at lists.infradead.org
> Subject: Re: [PATCH] NVMe: Increase shutdown complete time
>
> Seems reasonable to me.
>
> On Mon, 31 Mar 2014, Keith Busch wrote:
>
>> The spec doesn't have a recommendation for shutdown beyond "that the
>> host wait a minimum of one second for the shutdown operations to
>> complete", so we need to choose an arbitrarily value so we don't wait
>> forever but high enough to prevent unsafe shutdowns. Some h/w vendors
>> say the previous two seconds is not long enough at some capacities.
>> Twenty seconds ought to be enough for anybody, right?
>>
>> Signed-off-by: Keith Busch <keith.busch at intel.com>
>> ---
>> drivers/block/nvme-core.c |    2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
>> index 625259d..103da93 100644
>> --- a/drivers/block/nvme-core.c
>> +++ b/drivers/block/nvme-core.c
>> @@ -1352,7 +1352,7 @@ static int nvme_shutdown_ctrl(struct nvme_dev *dev)
>> 	cc = (readl(&dev->bar->cc) & ~NVME_CC_SHN_MASK) | NVME_CC_SHN_NORMAL;
>> 	writel(cc, &dev->bar->cc);
>>
>> -	timeout = 2 * HZ + jiffies;
>> +	timeout = 20 * HZ + jiffies;
>> 	while ((readl(&dev->bar->csts) & NVME_CSTS_SHST_MASK) !=
>> 							NVME_CSTS_SHST_CMPLT) {
>> 		msleep(100);
>> --
>> 1.7.10.4
>>
>>
>> _______________________________________________
>> Linux-nvme mailing list
>> Linux-nvme at lists.infradead.org
>> http://merlin.infradead.org/mailman/listinfo/linux-nvme
>>
>
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme at lists.infradead.org
> http://merlin.infradead.org/mailman/listinfo/linux-nvme
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-03-31 23:57     ` Dan McLeran
@ 2014-04-03 22:55       ` Yung-Chin Chen
  2014-04-03 23:12         ` Keith Busch
  0 siblings, 1 reply; 12+ messages in thread
From: Yung-Chin Chen @ 2014-04-03 22:55 UTC (permalink / raw)


Hi, all:

Another issue we encountered is the I/O timeout. I believe it is set to 5
seconds in the current driver. When the system has a lot of queues, each
queue has a lot of entries, and each entry is a large request, it is very
easy to exceed this limit during benchmarking. For example, if an NVMe
card supports 64 queues, 1K entries per queue, and 1MB per request, there
can be about 64GB of data ahead of a request when it is put on the queue,
and that is when the timer starts to tick. If the card can process 4GB/s,
it still takes 16 seconds to work through the earlier requests, so the
request will receive a timeout (EIO) from the driver. I understand this
is unlikely to happen in real applications, but it is not uncommon during
benchmarking.
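
To spell out the arithmetic (a stand-alone illustration using the
hypothetical numbers above, not measurements from any particular device):

	#include <stdio.h>

	int main(void)
	{
		/* 64 queues * 1024 entries * 1 MiB per request = 64 GiB queued
		 * ahead of the last request; at 4 GiB/s that backlog needs
		 * 64 / 4 = 16 seconds to drain, well past a 5 second timeout.
		 */
		unsigned long long backlog_gib = 64ULL * 1024 * 1 / 1024;
		unsigned long long rate_gib_s = 4;

		printf("worst-case queueing delay: %llu s\n",
		       backlog_gib / rate_gib_s);
		return 0;
	}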

Why do we choose 5 seconds? Are we able to make this parameter
configurable? Thanks.

Yung-Chin Chen
Greenliant Systems

-----Original Message-----
From: Linux-nvme [mailto:linux-nvme-bounces@lists.infradead.org] On
Behalf Of Dan McLeran
Sent: Monday, March 31, 2014 4:57 PM
To: Robles, Raymond C
Cc: Busch, Keith; linux-nvme at lists.infradead.org; Mcleran, Daniel
Subject: RE: [PATCH] NVMe: Increase shutdown complete time

CAP.TO is defined as the worst case time the driver should wait for the
controller to come ready. Not sure it can be used for shutdown as well.
If so, then we should just read that value and use it as we do in
nvme_wait_ready.

On Mon, 31 Mar 2014, Robles, Raymond C wrote:

> You're probably fine waiting anywhere up to CAP.TO.
>
> -----Original Message-----
> From: Linux-nvme [mailto:linux-nvme-bounces at lists.infradead.org] On 
> Behalf Of Dan McLeran
> Sent: Monday, March 31, 2014 11:57 AM
> To: Busch, Keith
> Cc: linux-nvme at lists.infradead.org
> Subject: Re: [PATCH] NVMe: Increase shutdown complete time
>
> Seems reasonable to me.
>
> On Mon, 31 Mar 2014, Keith Busch wrote:
>
>> The spec doesn't have a recommendation for shutdown beyond "that the
>> host wait a minimum of one second for the shutdown operations to
>> complete", so we need to choose an arbitrary value so we don't wait
>> forever but high enough to prevent unsafe shutdowns. Some h/w vendors
>> say the previous two seconds is not long enough at some capacities.
>> Twenty seconds ought to be enough for anybody, right?
>>
>> Signed-off-by: Keith Busch <keith.busch at intel.com>
>> ---
>> drivers/block/nvme-core.c |    2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c 
>> index 625259d..103da93 100644
>> --- a/drivers/block/nvme-core.c
>> +++ b/drivers/block/nvme-core.c
>> @@ -1352,7 +1352,7 @@ static int nvme_shutdown_ctrl(struct nvme_dev *dev)
>> 	cc = (readl(&dev->bar->cc) & ~NVME_CC_SHN_MASK) | NVME_CC_SHN_NORMAL;
>> 	writel(cc, &dev->bar->cc);
>>
>> -	timeout = 2 * HZ + jiffies;
>> +	timeout = 20 * HZ + jiffies;
>> 	while ((readl(&dev->bar->csts) & NVME_CSTS_SHST_MASK) !=
>> 							NVME_CSTS_SHST_CMPLT) {
>> 		msleep(100);
>> --
>> 1.7.10.4
>>
>>
>> _______________________________________________
>> Linux-nvme mailing list
>> Linux-nvme at lists.infradead.org
>> http://merlin.infradead.org/mailman/listinfo/linux-nvme
>>
>
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme at lists.infradead.org
> http://merlin.infradead.org/mailman/listinfo/linux-nvme
>

_______________________________________________
Linux-nvme mailing list
Linux-nvme at lists.infradead.org
http://merlin.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-04-03 22:55       ` Yung-Chin Chen
@ 2014-04-03 23:12         ` Keith Busch
  2014-04-04  0:18           ` Yung-Chin Chen
                             ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Keith Busch @ 2014-04-03 23:12 UTC (permalink / raw)


On Thu, 3 Apr 2014, Yung-Chin Chen wrote:
> Why do we choose 5 seconds? Are we able to make this parameter
> configurable? Thanks.

This is the second time I've been asked this in two days. :)

Any thoughts on something like the following?

diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 59e2adcc..6ade8de 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -51,6 +51,10 @@
  #define ADMIN_TIMEOUT	(60 * HZ)

+int nvme_io_timeout = 5;
+module_param(nvme_io_timeout, int, 0);
+MODULE_PARM_DESC(nvme_io_timeout, "timeout in seconds for io submitted to queue");
+
  static int nvme_major;
  module_param(nvme_major, int, 0);

diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 5993455..490488e 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -66,7 +66,8 @@ enum {

  #define NVME_VS(major, minor)	(major << 16 | minor)

-#define NVME_IO_TIMEOUT	(5 * HZ)
+extern int nvme_io_timeout;
+#define NVME_IO_TIMEOUT	(nvme_io_timeout * HZ)

  /*
   * Represents an NVM Express device.  Each nvme_dev is a PCI function.

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-04-03 23:12         ` Keith Busch
@ 2014-04-04  0:18           ` Yung-Chin Chen
  2014-04-04 15:33             ` Keith Busch
  2014-04-04  3:23           ` Dan McLeran
  2014-04-04 16:31           ` Matthew Wilcox
  2 siblings, 1 reply; 12+ messages in thread
From: Yung-Chin Chen @ 2014-04-04  0:18 UTC (permalink / raw)


Thanks, Keith, for your prompt response. I do not know the Linux driver
that well. If you use MODULE_PARM_DESC(), how do I specify the parameter
for a bootable NVMe device?

For example, RedHat 7.0 has a built-in NVMe driver and supports booting
from an NVMe device. The driver is built into the kernel and is not a
loadable module. Where can I specify the parameters for a built-in driver?
Thanks.

Yung-Chin Chen
Greenliant Systems

-----Original Message-----
From: Keith Busch [mailto:keith.busch@intel.com] 
Sent: Thursday, April 03, 2014 4:13 PM
To: Yung-Chin Chen
Cc: Dan McLeran; Robles, Raymond C; Busch, Keith;
linux-nvme at lists.infradead.org
Subject: RE: [PATCH] NVMe: Increase shutdown complete time

On Thu, 3 Apr 2014, Yung-Chin Chen wrote:
> Why do we choose 5 seconds? Are we able to make this parameter 
> configurable? Thanks.

This is the second time I've been asked this in two days. :)

Any thoughts on something like the following?

diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 59e2adcc..6ade8de 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -51,6 +51,10 @@
  #define ADMIN_TIMEOUT	(60 * HZ)

+int nvme_io_timeout = 5;
+module_param(nvme_io_timeout, int, 0);
+MODULE_PARM_DESC(nvme_io_timeout, "timeout in seconds for io submitted to queue");
+
  static int nvme_major;
  module_param(nvme_major, int, 0);

diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 5993455..490488e 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -66,7 +66,8 @@ enum {

  #define NVME_VS(major, minor)	(major << 16 | minor)

-#define NVME_IO_TIMEOUT	(5 * HZ)
+extern int nvme_io_timeout;
+#define NVME_IO_TIMEOUT	(nvme_io_timeout * HZ)

  /*
   * Represents an NVM Express device.  Each nvme_dev is a PCI function.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-04-03 23:12         ` Keith Busch
  2014-04-04  0:18           ` Yung-Chin Chen
@ 2014-04-04  3:23           ` Dan McLeran
  2014-04-04 16:31           ` Matthew Wilcox
  2 siblings, 0 replies; 12+ messages in thread
From: Dan McLeran @ 2014-04-04  3:23 UTC (permalink / raw)


Probably want to use ulong if we're going to make this a module param.
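
Concretely, the declaration in the patch might become something like the
following (a sketch of the suggested tweak, untested; the extern in
include/linux/nvme.h would need to change to "extern unsigned long
nvme_io_timeout;" to match):

	unsigned long nvme_io_timeout = 5;
	module_param(nvme_io_timeout, ulong, 0);
	MODULE_PARM_DESC(nvme_io_timeout, "timeout in seconds for io submitted to queue");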

On Thu, 3 Apr 2014, Keith Busch wrote:

> On Thu, 3 Apr 2014, Yung-Chin Chen wrote:
>> Why do we choose 5 seconds? Are we able to make this parameter
>> configurable? Thanks.
>
> This is the second time I've been asked this in two days. :)
>
> Any thoughts on something like the following?
>
> diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
> index 59e2adcc..6ade8de 100644
> --- a/drivers/block/nvme-core.c
> +++ b/drivers/block/nvme-core.c
> @@ -51,6 +51,10 @@
> #define ADMIN_TIMEOUT	(60 * HZ)
>
> +int nvme_io_timeout = 5;
> +module_param(nvme_io_timeout, int, 0);
> +MODULE_PARM_DESC(nvme_io_timeout, "timeout in seconds for io submitted to queue");
> +
> static int nvme_major;
> module_param(nvme_major, int, 0);
>
> diff --git a/include/linux/nvme.h b/include/linux/nvme.h
> index 5993455..490488e 100644
> --- a/include/linux/nvme.h
> +++ b/include/linux/nvme.h
> @@ -66,7 +66,8 @@ enum {
>
> #define NVME_VS(major, minor)	(major << 16 | minor)
>
> -#define NVME_IO_TIMEOUT	(5 * HZ)
> +extern int nvme_io_timeout;
> +#define NVME_IO_TIMEOUT	(nvme_io_timeout * HZ)
>
> /*
>  * Represents an NVM Express device.  Each nvme_dev is a PCI function.
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-03-31 16:24 [PATCH] NVMe: Increase shutdown complete time Keith Busch
  2014-03-31 18:56 ` Dan McLeran
@ 2014-04-04 15:22 ` Laura Jessen-SSI
  2014-04-04 15:36   ` Keith Busch
  1 sibling, 1 reply; 12+ messages in thread
From: Laura Jessen-SSI @ 2014-04-04 15:22 UTC (permalink / raw)


How do I have someone in Samsung added to this email distribution list?



Thanks,

Laura


-----Original Message-----
From: Linux-nvme [mailto:linux-nvme-bounces@lists.infradead.org] On Behalf Of Keith Busch
Sent: Monday, March 31, 2014 9:24 AM
To: linux-nvme at lists.infradead.org
Cc: Keith Busch
Subject: [PATCH] NVMe: Increase shutdown complete time

The spec doesn't have a recommendation for shutdown beyond "that the host
wait a minimum of one second for the shutdown operations to complete",
so we need to choose an arbitrary value so we don't wait forever but
high enough to prevent unsafe shutdowns. Some h/w vendors say the
previous two seconds is not long enough at some capacities. Twenty
seconds ought to be enough for anybody, right?

Signed-off-by: Keith Busch <keith.busch at intel.com>
---
 drivers/block/nvme-core.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 625259d..103da93 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -1352,7 +1352,7 @@ static int nvme_shutdown_ctrl(struct nvme_dev *dev)
 	cc = (readl(&dev->bar->cc) & ~NVME_CC_SHN_MASK) | NVME_CC_SHN_NORMAL;
 	writel(cc, &dev->bar->cc);
 
-	timeout = 2 * HZ + jiffies;
+	timeout = 20 * HZ + jiffies;
 	while ((readl(&dev->bar->csts) & NVME_CSTS_SHST_MASK) !=
 							NVME_CSTS_SHST_CMPLT) {
 		msleep(100);
--
1.7.10.4


_______________________________________________
Linux-nvme mailing list
Linux-nvme at lists.infradead.org
http://merlin.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-04-04  0:18           ` Yung-Chin Chen
@ 2014-04-04 15:33             ` Keith Busch
  0 siblings, 0 replies; 12+ messages in thread
From: Keith Busch @ 2014-04-04 15:33 UTC (permalink / raw)


On Thu, 3 Apr 2014, Yung-Chin Chen wrote:
> Thanks, Keith, for your prompt response. I do not know the Linux driver
> that well. If you use MODULE_PARM_DESC(), how do I specify the parameter
> for a bootable NVMe device?
>
> For example, RedHat 7.0 has a built-in NVMe driver and supports booting
> from an NVMe device. The driver is built into the kernel and is not a
> loadable module. Where can I specify the parameters for a built-in driver?

Okay, I'm not overly familiar with RHEL7, but I believe the following
is the way. If the module is built in, we can specify module parameters
at the vmlinuz line from the boot loader. To make it permanent, you
can append:

   <module>.<module_param>=<value>

to the GRUB_CMDLINE_LINUX line in /etc/default/grub.  The proposed new
parameter would look like 'nvme.nvme_io_timeout=20' if you want a 20
second timeout. After you set it, save the file, then run:

   grub2-mkconfig --output=/boot/efi/EFI/redhat/grub.cfg

I assume you're EFI booting since you mention booting from NVMe. If not
EFI, the output would be in /boot/grub2/grub.cfg. The driver should then
use the new timeout value by default on each reboot after that.
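
(If the driver were built as a loadable module instead, the same parameter
could presumably be passed at load time, e.g. "modprobe nvme
nvme_io_timeout=20".)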

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-04-04 15:22 ` Laura Jessen-SSI
@ 2014-04-04 15:36   ` Keith Busch
  0 siblings, 0 replies; 12+ messages in thread
From: Keith Busch @ 2014-04-04 15:36 UTC (permalink / raw)


On Fri, 4 Apr 2014, Laura Jessen-SSI wrote:
> How do I have someone in Samsung added to this email distribution list?

Anyone can subscribe here:

http://merlin.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] NVMe: Increase shutdown complete time
  2014-04-03 23:12         ` Keith Busch
  2014-04-04  0:18           ` Yung-Chin Chen
  2014-04-04  3:23           ` Dan McLeran
@ 2014-04-04 16:31           ` Matthew Wilcox
  2 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox @ 2014-04-04 16:31 UTC (permalink / raw)


On Thu, Apr 03, 2014 at 05:12:34PM -0600, Keith Busch wrote:
> +int nvme_io_timeout = 5;
> +module_param(nvme_io_timeout, int, 0);
> +MODULE_PARM_DESC(nvme_io_timeout, "timeout in seconds for io submitted to queue");

Any reason not to make this module_param 0644?  That would allow it to
be written by root and read by anybody.
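
With 0644 the current value should also appear at runtime under
/sys/module/nvme/parameters/nvme_io_timeout, readable by anyone and
writable by root, assuming the parameter keeps that name.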

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2014-04-04 16:31 UTC | newest]

Thread overview: 12+ messages
2014-03-31 16:24 [PATCH] NVMe: Increase shutdown complete time Keith Busch
2014-03-31 18:56 ` Dan McLeran
2014-03-31 20:37   ` Robles, Raymond C
2014-03-31 23:57     ` Dan McLeran
2014-04-03 22:55       ` Yung-Chin Chen
2014-04-03 23:12         ` Keith Busch
2014-04-04  0:18           ` Yung-Chin Chen
2014-04-04 15:33             ` Keith Busch
2014-04-04  3:23           ` Dan McLeran
2014-04-04 16:31           ` Matthew Wilcox
2014-04-04 15:22 ` Laura Jessen-SSI
2014-04-04 15:36   ` Keith Busch
