All of lore.kernel.org
* [PATCH] virtio-blk: Initialize blkqueue depth from virtqueue size
@ 2014-03-14 23:57 Venkatesh Srinivas
  2014-03-15  3:34   ` Theodore Ts'o
  0 siblings, 1 reply; 27+ messages in thread
From: Venkatesh Srinivas @ 2014-03-14 23:57 UTC (permalink / raw)
  To: tytso, rusty, mst, virtualization, fes

virtio-blk set the default queue depth to 64 requests, which was
insufficient for high-IOPS devices. Instead set the blk-queue depth to
the device's virtqueue depth divided by two (each I/O requires at least
two VQ entries).

Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>
---
 drivers/block/virtio_blk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index b1cb3f4..863ab02 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -485,7 +485,6 @@ static struct blk_mq_ops virtio_mq_ops = {
 static struct blk_mq_reg virtio_mq_reg = {
 	.ops		= &virtio_mq_ops,
 	.nr_hw_queues	= 1,
-	.queue_depth	= 64,
 	.numa_node	= NUMA_NO_NODE,
 	.flags		= BLK_MQ_F_SHOULD_MERGE,
 };
@@ -555,6 +554,7 @@ static int virtblk_probe(struct virtio_device *vdev)
 	virtio_mq_reg.cmd_size =
 		sizeof(struct virtblk_req) +
 		sizeof(struct scatterlist) * sg_elems;
+	virtio_mq_reg.queue_depth = vblk->vq->num_free / 2;
 
 	q = vblk->disk->queue = blk_mq_init_queue(&virtio_mq_reg, vblk);
 	if (!q) {
-- 
1.9.0.279.gdc9e3eb

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-14 23:57 [PATCH] virtio-blk: Initialize blkqueue depth from virtqueue size Venkatesh Srinivas
@ 2014-03-15  3:34   ` Theodore Ts'o
  0 siblings, 0 replies; 27+ messages in thread
From: Theodore Ts'o @ 2014-03-15  3:34 UTC (permalink / raw)
  To: Linux Kernel Developers List
  Cc: Theodore Ts'o, Venkatesh Srinivas, Rusty Russell,
	Michael S. Tsirkin, virtio-dev, virtualization, Frank Swiderski

The current virtio block sets a queue depth of 64, which is
insufficient for very fast devices.  It has been demonstrated that
with a high IOPS device, using a queue depth of 256 can double the
IOPS which can be sustained.

As suggested by Venkatesh Srinivas, set the queue depth by default to
be one half the device's virtqueue size, which is the maximum queue
depth that can be supported by the channel to the host OS (each I/O
request requires at least two VQ entries).

Also allow the queue depth to be something which can be set at module
load time or via a kernel boot-time parameter, for
testing/benchmarking purposes.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: virtio-dev@lists.oasis-open.org
Cc: virtualization@lists.linux-foundation.org
Cc: Frank Swiderski <fes@google.com>
---

This is a combination of my patch and Venkatesh's patch.  I agree that
setting the default automatically is better than requiring the user to
set the value by hand.

 drivers/block/virtio_blk.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 6a680d4..0f70c01 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -481,6 +481,9 @@ static struct blk_mq_ops virtio_mq_ops = {
 	.free_hctx	= blk_mq_free_single_hw_queue,
 };
 
+static int queue_depth = -1;
+module_param(queue_depth, int, 0444);
+
 static struct blk_mq_reg virtio_mq_reg = {
 	.ops		= &virtio_mq_ops,
 	.nr_hw_queues	= 1,
@@ -551,9 +554,14 @@ static int virtblk_probe(struct virtio_device *vdev)
 		goto out_free_vq;
 	}
 
+	virtio_mq_reg.queue_depth = queue_depth > 0 ? queue_depth :
+		(vblk->vq->num_free / 2);
 	virtio_mq_reg.cmd_size =
 		sizeof(struct virtblk_req) +
 		sizeof(struct scatterlist) * sg_elems;
+	virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN);
+	pr_info("%s: using queue depth %d\n", vblk->disk->disk_name,
+		virtio_mq_reg.queue_depth);
 
 	q = vblk->disk->queue = blk_mq_init_queue(&virtio_mq_reg, vblk);
 	if (!q) {
@@ -565,8 +573,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 
 	q->queuedata = vblk;
 
-	virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN);
-
 	vblk->disk->major = major;
 	vblk->disk->first_minor = index_to_minor(index);
 	vblk->disk->private_data = vblk;
-- 
1.9.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-15  3:34   ` Theodore Ts'o
@ 2014-03-15 10:57     ` Konrad Rzeszutek Wilk
  -1 siblings, 0 replies; 27+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-03-15 10:57 UTC (permalink / raw)
  To: Theodore Ts'o, Linux Kernel Developers List
  Cc: Venkatesh Srinivas, Rusty Russell, Michael S. Tsirkin,
	virtio-dev, virtualization, Frank Swiderski

On March 14, 2014 11:34:31 PM EDT, Theodore Ts'o <tytso@mit.edu> wrote:
>The current virtio block sets a queue depth of 64, which is
>insufficient for very fast devices.  It has been demonstrated that
>with a high IOPS device, using a queue depth of 256 can double the
>IOPS which can be sustained.
>
>As suggested by Venkatesh Srinivas, set the queue depth by default to
>be one half the device's virtqueue size, which is the maximum queue
>depth that can be supported by the channel to the host OS (each I/O
>request requires at least two VQ entries).
>
>Also allow the queue depth to be something which can be set at module
>load time or via a kernel boot-time parameter, for
>testing/benchmarking purposes.
>
>Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
>Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>
>Cc: Rusty Russell <rusty@rustcorp.com.au>
>Cc: "Michael S. Tsirkin" <mst@redhat.com>
>Cc: virtio-dev@lists.oasis-open.org
>Cc: virtualization@lists.linux-foundation.org
>Cc: Frank Swiderski <fes@google.com>
>---
>
>This is a combination of my patch and Venkatesh's patch.  I agree that
>setting the default automatically is better than requiring the user to
>set the value by hand.
>
> drivers/block/virtio_blk.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
>diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>index 6a680d4..0f70c01 100644
>--- a/drivers/block/virtio_blk.c
>+++ b/drivers/block/virtio_blk.c
>@@ -481,6 +481,9 @@ static struct blk_mq_ops virtio_mq_ops = {
> 	.free_hctx	= blk_mq_free_single_hw_queue,
> };
> 
>+static int queue_depth = -1;
>+module_param(queue_depth, int, 0444);

?

>+
> static struct blk_mq_reg virtio_mq_reg = {
> 	.ops		= &virtio_mq_ops,
> 	.nr_hw_queues	= 1,
>@@ -551,9 +554,14 @@ static int virtblk_probe(struct virtio_device
>*vdev)
> 		goto out_free_vq;
> 	}
> 
>+	virtio_mq_reg.queue_depth = queue_depth > 0 ? queue_depth :
>+		(vblk->vq->num_free / 2);
> 	virtio_mq_reg.cmd_size =
> 		sizeof(struct virtblk_req) +
> 		sizeof(struct scatterlist) * sg_elems;
>+	virtblk_name_format("vd", index, vblk->disk->disk_name,
>DISK_NAME_LEN);
>+	pr_info("%s: using queue depth %d\n", vblk->disk->disk_name,
>+		virtio_mq_reg.queue_depth);

Isn't that visible from sysfs?
> 
> 	q = vblk->disk->queue = blk_mq_init_queue(&virtio_mq_reg, vblk);
> 	if (!q) {
>@@ -565,8 +573,6 @@ static int virtblk_probe(struct virtio_device
>*vdev)
> 
> 	q->queuedata = vblk;
> 
>-	virtblk_name_format("vd", index, vblk->disk->disk_name,
>DISK_NAME_LEN);
>-
> 	vblk->disk->major = major;
> 	vblk->disk->first_minor = index_to_minor(index);
> 	vblk->disk->private_data = vblk;



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-15 10:57     ` Konrad Rzeszutek Wilk
@ 2014-03-15 13:20       ` Theodore Ts'o
  -1 siblings, 0 replies; 27+ messages in thread
From: Theodore Ts'o @ 2014-03-15 13:20 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Linux Kernel Developers List, Venkatesh Srinivas, Rusty Russell,
	Michael S. Tsirkin, virtio-dev, virtualization, Frank Swiderski

On Sat, Mar 15, 2014 at 06:57:01AM -0400, Konrad Rzeszutek Wilk wrote:
> >+	pr_info("%s: using queue depth %d\n", vblk->disk->disk_name,
> >+		virtio_mq_reg.queue_depth);
> 
> Isn't that visible from sysfs?

As near as I can tell, it's not.  I haven't been able to find anything
that either represents this value, or can be calculated from this
value.  Maybe I missed something?

						- Ted

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-15  3:34   ` Theodore Ts'o
@ 2014-03-15 13:57     ` Christoph Hellwig
  -1 siblings, 0 replies; 27+ messages in thread
From: Christoph Hellwig @ 2014-03-15 13:57 UTC (permalink / raw)
  To: Theodore Ts'o
  Cc: Linux Kernel Developers List, Venkatesh Srinivas, Rusty Russell,
	Michael S. Tsirkin, virtio-dev, virtualization, Frank Swiderski

On Fri, Mar 14, 2014 at 11:34:31PM -0400, Theodore Ts'o wrote:
> The current virtio block sets a queue depth of 64, which is
> insufficient for very fast devices.  It has been demonstrated that
> with a high IOPS device, using a queue depth of 256 can double the
> IOPS which can be sustained.
> 
> As suggested by Venkatesh Srinivas, set the queue depth by default to
> be one half the device's virtqueue size, which is the maximum queue
> depth that can be supported by the channel to the host OS (each I/O
> request requires at least two VQ entries).

I don't think this should be a module parameter.  The default sizing
should be based on the parameters of the actual virtqueue, and if we
want to allow tuning it, it should be via a sysfs attribute, preferably
using the same semantics as SCSI.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-15 13:57     ` Christoph Hellwig
@ 2014-03-15 15:13       ` Theodore Ts'o
  -1 siblings, 0 replies; 27+ messages in thread
From: Theodore Ts'o @ 2014-03-15 15:13 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Linux Kernel Developers List, Venkatesh Srinivas, Rusty Russell,
	Michael S. Tsirkin, virtio-dev, virtualization, Frank Swiderski

On Sat, Mar 15, 2014 at 06:57:23AM -0700, Christoph Hellwig wrote:
> I don't think this should be a module parameter.  The default sizing
> should be based on the parameters of the actual virtqueue, and if we
> want to allow tuning it, it should be via a sysfs attribute, preferably
> using the same semantics as SCSI.

I wanted that too, but looking at the multiqueue code, it wasn't at all
obvious how to safely adjust the queue depth once the virtio-blk
device driver is initialized and becomes active.  There are all sorts
of data structures, including bitmaps, that would have to be resized,
and I decided it would be too difficult / risky for me to make it
dynamically resizable.

So I settled on a module parameter, thinking it would mostly only be used
by testers / benchmarkers.

Can someone suggest a way to do a dynamic resizing of the virtio-blk
queue depth easily / safely?

    	    	      	     	- Ted

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-15  3:34   ` Theodore Ts'o
@ 2014-03-17  0:42     ` Rusty Russell
  -1 siblings, 0 replies; 27+ messages in thread
From: Rusty Russell @ 2014-03-17  0:42 UTC (permalink / raw)
  To: Theodore Ts'o, Linux Kernel Developers List
  Cc: Theodore Ts'o, Venkatesh Srinivas, Michael S. Tsirkin,
	virtio-dev, virtualization, Frank Swiderski

Theodore Ts'o <tytso@mit.edu> writes:
> The current virtio block sets a queue depth of 64, which is
> insufficient for very fast devices.  It has been demonstrated that
> with a high IOPS device, using a queue depth of 256 can double the
> IOPS which can be sustained.
>
> As suggested by Venkatesh Srinivas, set the queue depth by default to
> be one half the device's virtqueue size, which is the maximum queue
> depth that can be supported by the channel to the host OS (each I/O
> request requires at least two VQ entries).
>
> Also allow the queue depth to be something which can be set at module
> load time or via a kernel boot-time parameter, for
> testing/benchmarking purposes.

Note that with indirect descriptors (which is supported by Almost
Everyone), we can actually use the full index, so this value is a bit
pessimistic.  But it's OK as a starting point.

Cheers,
Rusty.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-17  0:42     ` Rusty Russell
@ 2014-03-17  5:40       ` tytso
  -1 siblings, 0 replies; 27+ messages in thread
From: tytso @ 2014-03-17  5:40 UTC (permalink / raw)
  To: Rusty Russell
  Cc: Linux Kernel Developers List, Venkatesh Srinivas,
	Michael S. Tsirkin, virtio-dev, virtualization, Frank Swiderski

On Mon, Mar 17, 2014 at 11:12:15AM +1030, Rusty Russell wrote:
> 
> Note that with indirect descriptors (which is supported by Almost
> Everyone), we can actually use the full index, so this value is a bit
> pessimistic.  But it's OK as a starting point.

So is this something that can go upstream with perhaps a slight
adjustment in the commit description?  Do you think we need to be able
to dynamically adjust the queue depth after the module has been loaded
or the kernel has been booted?  If so, a hint from anyone about the best
way to do that would be much appreciated.

Thanks,

					- Ted

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-17  5:40       ` tytso
@ 2014-03-19  6:28         ` Rusty Russell
  -1 siblings, 0 replies; 27+ messages in thread
From: Rusty Russell @ 2014-03-19  6:28 UTC (permalink / raw)
  To: tytso
  Cc: Linux Kernel Developers List, Venkatesh Srinivas,
	Michael S. Tsirkin, virtio-dev, virtualization, Frank Swiderski

tytso@mit.edu writes:
> On Mon, Mar 17, 2014 at 11:12:15AM +1030, Rusty Russell wrote:
>> 
>> Note that with indirect descriptors (which is supported by Almost
>> Everyone), we can actually use the full index, so this value is a bit
>> pessimistic.  But it's OK as a starting point.
>
> So is this something that can go upstream with perhaps a slight
> adjustment in the commit description?

Well, I rewrote it again, see below.

> Do you think we need to be able
> to dynamically adjust the queue depth after the module has been loaded
> or the kernel has been booted?

That would be nice, sure, but...

> If so, a hint from anyone about the best
> way to do that would be much appreciated.

... I share your wonder and mystery at the ways of the block layer.

Subject: virtio-blk: base queue-depth on virtqueue ringsize or module param

Venkatesh spake thus:

  virtio-blk set the default queue depth to 64 requests, which was
  insufficient for high-IOPS devices. Instead set the blk-queue depth to
  the device's virtqueue depth divided by two (each I/O requires at least
  two VQ entries).

But behold, Ted added a module parameter:

  Also allow the queue depth to be something which can be set at module
  load time or via a kernel boot-time parameter, for
  testing/benchmarking purposes.

And I rewrote it substantially, mainly to take
VIRTIO_RING_F_INDIRECT_DESC into account.

As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
have made a change.  This version does (since QEMU also offers
VIRTIO_RING_F_INDIRECT_DESC).

Inspired-by: "Theodore Ts'o" <tytso@mit.edu>
Based-on-the-true-story-of: Venkatesh Srinivas <venkateshs@google.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: virtio-dev@lists.oasis-open.org
Cc: virtualization@lists.linux-foundation.org
Cc: Frank Swiderski <fes@google.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index a2db9ed288f2..c101bbc72095 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -491,10 +491,11 @@ static struct blk_mq_ops virtio_mq_ops = {
 static struct blk_mq_reg virtio_mq_reg = {
 	.ops		= &virtio_mq_ops,
 	.nr_hw_queues	= 1,
-	.queue_depth	= 64,
+	.queue_depth	= 0, /* Set in virtblk_probe */
 	.numa_node	= NUMA_NO_NODE,
 	.flags		= BLK_MQ_F_SHOULD_MERGE,
 };
+module_param_named(queue_depth, virtio_mq_reg.queue_depth, uint, 0444);
 
 static void virtblk_init_vbr(void *data, struct blk_mq_hw_ctx *hctx,
 			     struct request *rq, unsigned int nr)
@@ -558,6 +559,13 @@ static int virtblk_probe(struct virtio_device *vdev)
 		goto out_free_vq;
 	}
 
+	/* Default queue sizing is to fill the ring. */
+	if (!virtio_mq_reg.queue_depth) {
+		virtio_mq_reg.queue_depth = vblk->vq->num_free;
+		/* ... but without indirect descs, we use 2 descs per req */
+		if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
+			virtio_mq_reg.queue_depth /= 2;
+	}
 	virtio_mq_reg.cmd_size =
 		sizeof(struct virtblk_req) +
 		sizeof(struct scatterlist) * sg_elems;

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
@ 2014-03-19  6:28         ` Rusty Russell
  0 siblings, 0 replies; 27+ messages in thread
From: Rusty Russell @ 2014-03-19  6:28 UTC (permalink / raw)
  To: tytso
  Cc: Frank Swiderski, virtio-dev, Michael S. Tsirkin,
	Linux Kernel Developers List, virtualization

tytso@mit.edu writes:
> On Mon, Mar 17, 2014 at 11:12:15AM +1030, Rusty Russell wrote:
>> 
>> Note that with indirect descriptors (which is supported by Almost
>> Everyone), we can actually use the full index, so this value is a bit
>> pessimistic.  But it's OK as a starting point.
>
> So is this something that can go upstream with perhaps a slight
> adjustment in the commit description?

Well, I rewrote it again, see below.

> Do you think we need to be able
> to dynamically adjust the queue depth after the module has been loaded
> or the kernel has been booted?

That would be nice, sure, but...

> If so, anyone a hint about the best
> way to do that would be much appreciated.

... I share your wonder and mystery at the ways of the block layer.

Subject: virtio-blk: base queue-depth on virtqueue ringsize or module param

Venkatash spake thus:

  virtio-blk set the default queue depth to 64 requests, which was
  insufficient for high-IOPS devices. Instead set the blk-queue depth to
  the device's virtqueue depth divided by two (each I/O requires at least
  two VQ entries).

But behold, Ted added a module parameter:

  Also allow the queue depth to be something which can be set at module
  load time or via a kernel boot-time parameter, for
  testing/benchmarking purposes.

And I rewrote it substantially, mainly to take
VIRTIO_RING_F_INDIRECT_DESC into account.

As QEMU sets the vq size for PCI to 128, Venkatash's patch wouldn't
have made a change.  This version does (since QEMU also offers
VIRTIO_RING_F_INDIRECT_DESC.

Inspired-by: "Theodore Ts'o" <tytso@mit.edu>
Based-on-the-true-story-of: Venkatesh Srinivas <venkateshs@google.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: virtio-dev@lists.oasis-open.org
Cc: virtualization@lists.linux-foundation.org
Cc: Frank Swiderski <fes@google.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index a2db9ed288f2..c101bbc72095 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -491,10 +491,11 @@ static struct blk_mq_ops virtio_mq_ops = {
 static struct blk_mq_reg virtio_mq_reg = {
 	.ops		= &virtio_mq_ops,
 	.nr_hw_queues	= 1,
-	.queue_depth	= 64,
+	.queue_depth	= 0, /* Set in virtblk_probe */
 	.numa_node	= NUMA_NO_NODE,
 	.flags		= BLK_MQ_F_SHOULD_MERGE,
 };
+module_param_named(queue_depth, virtio_mq_reg.queue_depth, uint, 0444);
 
 static void virtblk_init_vbr(void *data, struct blk_mq_hw_ctx *hctx,
 			     struct request *rq, unsigned int nr)
@@ -558,6 +559,13 @@ static int virtblk_probe(struct virtio_device *vdev)
 		goto out_free_vq;
 	}
 
+	/* Default queue sizing is to fill the ring. */
+	if (!virtio_mq_reg.queue_depth) {
+		virtio_mq_reg.queue_depth = vblk->vq->num_free;
+		/* ... but without indirect descs, we use 2 descs per req */
+		if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
+			virtio_mq_reg.queue_depth /= 2;
+	}
 	virtio_mq_reg.cmd_size =
 		sizeof(struct virtblk_req) +
 		sizeof(struct scatterlist) * sg_elems;
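For readers following along, the sizing heuristic in the hunk above can be
sketched as a standalone function (plain C, not the kernel code itself; the
function name and parameters are illustrative, with the module parameter
modeled as a plain argument):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the queue-depth heuristic: an explicit module-parameter
 * override wins; otherwise fill the ring when indirect descriptors are
 * negotiated (one indirect descriptor per request), else budget two
 * direct descriptors per request. */
unsigned int virtblk_queue_depth(unsigned int num_free,
                                 bool has_indirect_desc,
                                 unsigned int module_param)
{
	if (module_param)	/* set via queue_depth= at load time */
		return module_param;
	if (has_indirect_desc)	/* one ring slot per request */
		return num_free;
	return num_free / 2;	/* at least two slots per request */
}
```

With QEMU's 128-entry PCI ring this gives 64 without
VIRTIO_RING_F_INDIRECT_DESC (matching the old hard-coded default) and 128
with it.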

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-19  6:28         ` Rusty Russell
@ 2014-03-19 17:48           ` Venkatesh Srinivas
  -1 siblings, 0 replies; 27+ messages in thread
From: Venkatesh Srinivas @ 2014-03-19 17:48 UTC (permalink / raw)
  To: Rusty Russell
  Cc: tytso, Linux Kernel Developers List, Michael S. Tsirkin,
	virtio-dev, virtualization, Frank Swiderski

> And I rewrote it substantially, mainly to take
> VIRTIO_RING_F_INDIRECT_DESC into account.
>
> As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
> have made a change.  This version does (since QEMU also offers
> VIRTIO_RING_F_INDIRECT_DESC).

That the divide-by-2 produced the same queue depth as the prior
computation in QEMU was deliberate -- but raising it to 128 seems
pretty reasonable.

Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>

-- vs;

* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-19 17:48           ` Venkatesh Srinivas
@ 2014-03-25 18:50             ` Venkatesh Srinivas
  -1 siblings, 0 replies; 27+ messages in thread
From: Venkatesh Srinivas @ 2014-03-25 18:50 UTC (permalink / raw)
  To: Rusty Russell
  Cc: Theodore Ts'o, Linux Kernel Developers List,
	Michael S. Tsirkin, virtualization

On Wed, Mar 19, 2014 at 10:48 AM, Venkatesh Srinivas
<venkateshs@google.com> wrote:
>> And I rewrote it substantially, mainly to take
>> VIRTIO_RING_F_INDIRECT_DESC into account.
>>
>> As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
>> have made a change.  This version does (since QEMU also offers
>> VIRTIO_RING_F_INDIRECT_DESC).
>
> That divide-by-2 produced the same queue depth as the prior
> computation in QEMU was deliberate -- but raising it to 128 seems
> pretty reasonable.
>
> Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>

Soft ping about this patch.

Thanks,
-- vs;


* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-25 18:50             ` Venkatesh Srinivas
@ 2014-03-31  3:52               ` Rusty Russell
  -1 siblings, 0 replies; 27+ messages in thread
From: Rusty Russell @ 2014-03-31  3:52 UTC (permalink / raw)
  To: Venkatesh Srinivas
  Cc: Theodore Ts'o, Linux Kernel Developers List,
	Michael S. Tsirkin, virtualization

Venkatesh Srinivas <venkateshs@google.com> writes:
> On Wed, Mar 19, 2014 at 10:48 AM, Venkatesh Srinivas
> <venkateshs@google.com> wrote:
>>> And I rewrote it substantially, mainly to take
>>> VIRTIO_RING_F_INDIRECT_DESC into account.
>>>
>>> As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
>>> have made a change.  This version does (since QEMU also offers
>>> VIRTIO_RING_F_INDIRECT_DESC).
>>
>> That divide-by-2 produced the same queue depth as the prior
>> computation in QEMU was deliberate -- but raising it to 128 seems
>> pretty reasonable.
>>
>> Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>
>
> Soft ping about this patch.

It's at the head of my virtio-next tree.

Cheers,
Rusty.


* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-03-31  3:52               ` Rusty Russell
  (?)
@ 2014-04-01  2:27               ` Theodore Ts'o
  2014-04-01 10:49                 ` Stefan Hajnoczi
  -1 siblings, 1 reply; 27+ messages in thread
From: Theodore Ts'o @ 2014-04-01  2:27 UTC (permalink / raw)
  To: Rusty Russell
  Cc: Venkatesh Srinivas, Linux Kernel Developers List,
	Michael S. Tsirkin, virtualization

On Mon, Mar 31, 2014 at 02:22:50PM +1030, Rusty Russell wrote:
> 
> It's head of my virtio-next tree.

Hey Rusty,

While we have your attention --- what's your opinion about adding TRIM
support to virtio-blk?  I understand that you're starting an OASIS
standardization process for virtio --- what does that mean vis-a-vis a
patch to plumb discard support through virtio-blk?

Thanks!

						- Ted


* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-04-01  2:27               ` Theodore Ts'o
@ 2014-04-01 10:49                 ` Stefan Hajnoczi
  2014-04-02  7:36                   ` Rusty Russell
  0 siblings, 1 reply; 27+ messages in thread
From: Stefan Hajnoczi @ 2014-04-01 10:49 UTC (permalink / raw)
  To: Theodore Ts'o, Rusty Russell, Venkatesh Srinivas,
	Linux Kernel Developers List, Michael S. Tsirkin,
	Linux Virtualization

On Tue, Apr 1, 2014 at 4:27 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> On Mon, Mar 31, 2014 at 02:22:50PM +1030, Rusty Russell wrote:
>>
>> It's head of my virtio-next tree.
>
> Hey Rusty,
>
> While we have your attention --- what's your opinion about adding TRIM
> support to virtio-blk.  I understand that you're starting an OASIS
> standardization process for virtio --- what does that mean vis-a-vis a
> patch to plumb discard support through virtio-blk?

virtio-scsi already supports discard.  But maybe you cannot switch
away from virtio-blk?

If you need to add discard to virtio-blk then it could be added to the
standard.  The standards text is kept in a svn repository here:
https://tools.oasis-open.org/version-control/browse/wsvn/virtio/

Stefan


* Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
  2014-04-01 10:49                 ` Stefan Hajnoczi
@ 2014-04-02  7:36                   ` Rusty Russell
  0 siblings, 0 replies; 27+ messages in thread
From: Rusty Russell @ 2014-04-02  7:36 UTC (permalink / raw)
  To: Stefan Hajnoczi, Theodore Ts'o, Venkatesh Srinivas,
	Linux Kernel Developers List, Michael S. Tsirkin,
	Linux Virtualization

Stefan Hajnoczi <stefanha@gmail.com> writes:
> On Tue, Apr 1, 2014 at 4:27 AM, Theodore Ts'o <tytso@mit.edu> wrote:
>> On Mon, Mar 31, 2014 at 02:22:50PM +1030, Rusty Russell wrote:
>>>
>>> It's head of my virtio-next tree.
>>
>> Hey Rusty,
>>
>> While we have your attention --- what's your opinion about adding TRIM
>> support to virtio-blk.  I understand that you're starting an OASIS
>> standardization process for virtio --- what does that mean vis-a-vis a
>> patch to plumb discard support through virtio-blk?
>
> virtio-scsi already supports discard.  But maybe you cannot switch
> away from virtio-blk?
>
> If you need to add discard to virtio-blk then it could be added to the
> standard.  The standards text is kept in a svn repository here:
> https://tools.oasis-open.org/version-control/browse/wsvn/virtio/

It would be trivial to add, and I wouldn't be completely opposed, but we
generally point to virtio-scsi when people want actual features.

Cheers,
Rusty.


end of thread, other threads:[~2014-04-04  4:50 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-03-14 23:57 [PATCH] virtio-blk: Initialize blkqueue depth from virtqueue size Venkatesh Srinivas
2014-03-15  3:34 ` [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor Theodore Ts'o
2014-03-15 10:57   ` Konrad Rzeszutek Wilk
2014-03-15 13:20     ` Theodore Ts'o
2014-03-15 13:57   ` Christoph Hellwig
2014-03-15 15:13     ` Theodore Ts'o
2014-03-17  0:42   ` Rusty Russell
2014-03-17  5:40     ` tytso
2014-03-19  6:28       ` Rusty Russell
2014-03-19 17:48         ` Venkatesh Srinivas
2014-03-25 18:50           ` Venkatesh Srinivas
2014-03-31  3:52             ` Rusty Russell
2014-04-01  2:27               ` Theodore Ts'o
2014-04-01 10:49                 ` Stefan Hajnoczi
2014-04-02  7:36                   ` Rusty Russell
