[net-next,RFC,V5,3/5] virtio: introduce an API to set affinity for a virtqueue

Message ID 1341484194-8108-4-git-send-email-jasowang@redhat.com
State New, archived
Series
  • Multiqueue virtio-net

Commit Message

Jason Wang July 5, 2012, 10:29 a.m. UTC
Sometimes a virtio device needs to configure an irq affinity hint to maximize
performance. Instead of just exposing the irq of a virtqueue, this patch
introduces an API to set the affinity for a virtqueue.

The API is best-effort: the affinity hint may not be set as expected due to
platform support, irq sharing or irq type. Currently, only the pci method is
implemented, and we set the affinity as follows:

- if the device uses INTx, we just ignore the request
- if the device has a per-vq vector, we force the affinity hint
- if the virtqueues share an MSI vector, we make the affinity the OR over
  all affinities requested

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/virtio/virtio_pci.c   |   46 +++++++++++++++++++++++++++++++++++++++++
 include/linux/virtio_config.h |   21 ++++++++++++++++++
 2 files changed, 67 insertions(+), 0 deletions(-)
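
As a usage illustration (hypothetical driver code, not part of this patch), a
multiqueue driver could spread its virtqueues across the online CPUs roughly
like this; the helper name spread_vq_affinity is an assumption:

#include <linux/cpumask.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

/* Hypothetical helper: hint each of vqs[0..nvqs-1] onto the online CPUs,
 * round-robin. virtqueue_set_affinity() is best-effort, so its return
 * value is ignored here. */
static void spread_vq_affinity(struct virtqueue **vqs, int nvqs)
{
	int i, cpu = cpumask_first(cpu_online_mask);

	for (i = 0; i < nvqs; i++) {
		virtqueue_set_affinity(vqs[i], cpu);
		cpu = cpumask_next(cpu, cpu_online_mask);
		if (cpu >= nr_cpu_ids)
			cpu = cpumask_first(cpu_online_mask);
	}
}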

Comments

Paolo Bonzini July 27, 2012, 2:38 p.m. UTC | #1
On 05/07/2012 12:29, Jason Wang wrote:
> Sometimes a virtio device needs to configure an irq affinity hint to maximize
> performance. Instead of just exposing the irq of a virtqueue, this patch
> introduces an API to set the affinity for a virtqueue.
> 
> The API is best-effort: the affinity hint may not be set as expected due to
> platform support, irq sharing or irq type. Currently, only the pci method is
> implemented, and we set the affinity as follows:
> 
> - if the device uses INTx, we just ignore the request
> - if the device has a per-vq vector, we force the affinity hint
> - if the virtqueues share an MSI vector, we make the affinity the OR over
>   all affinities requested
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>

Hmm, I don't see any benefit from this patch; I need to use
irq_set_affinity (which, however, is not exported) to actually bind IRQs
to CPUs.  Example:

with irq_set_affinity_hint:
 43:   89  107  100   97   PCI-MSI-edge   virtio0-request
 44:  178  195  268  199   PCI-MSI-edge   virtio0-request
 45:   97  100   97  155   PCI-MSI-edge   virtio0-request
 46:  234  261  213  218   PCI-MSI-edge   virtio0-request

with irq_set_affinity:
 43:  721    0    0    1   PCI-MSI-edge   virtio0-request
 44:    0  746    0    1   PCI-MSI-edge   virtio0-request
 45:    0    0  658    0   PCI-MSI-edge   virtio0-request
 46:    0    0    1  547   PCI-MSI-edge   virtio0-request

I gathered these quickly after boot, but real benchmarks show the same
behavior, and performance actually gets worse with virtio-scsi
multiqueue+irq_set_affinity_hint than with irq_set_affinity.
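
For reference, a minimal sketch of the two calls being contrasted here
(assuming code built into the kernel proper, since irq_set_affinity is not
exported to modules; the function name pin_irq_to_cpu is an assumption):

#include <linux/interrupt.h>
#include <linux/cpumask.h>

static void pin_irq_to_cpu(unsigned int irq, int cpu)
{
	/* Advisory: only records a hint for userspace (irqbalance)
	 * to pick up and apply. */
	irq_set_affinity_hint(irq, cpumask_of(cpu));

	/* Forced: the kernel reprograms the interrupt routing itself. */
	irq_set_affinity(irq, cpumask_of(cpu));
}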

I also tried adding IRQ_NO_BALANCING, but the only effect is that I
cannot set the affinity.

The queue steering algorithm I use in virtio-scsi is extremely simple
and based on your tx code.  See how my nice pinning is destroyed:

# taskset -c 0 dd if=/dev/sda bs=1M count=1000 of=/dev/null iflag=direct
# cat /proc/interrupts
 43:  2690 2709 2691 2696   PCI-MSI-edge      virtio0-request
 44:   109  122  199  124   PCI-MSI-edge      virtio0-request
 45:   170  183  170  237   PCI-MSI-edge      virtio0-request
 46:   143  166  125  125   PCI-MSI-edge      virtio0-request

All my requests come from CPU#0 and thus go to the first virtqueue, but
the interrupts are serviced all over the place.
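
The steering rule meant above is roughly of this shape (a hypothetical
sketch, not the actual virtio-scsi code): the submitting CPU selects the
queue, which is why everything issued from CPU#0 lands on the first
virtqueue:

#include <linux/smp.h>
#include <linux/types.h>

/* Hypothetical sketch of a per-CPU steering rule: requests issued on a
 * given CPU always go to the same virtqueue. raw_smp_processor_id() is
 * used because the result is only a load-distribution choice. */
static u16 pick_request_vq(unsigned int num_queues)
{
	return (u16)(raw_smp_processor_id() % num_queues);
}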

Did you set the affinity manually in your experiments, or perhaps there
is a difference between scsi and networking... (interrupt mitigation?)

Paolo
Michael S. Tsirkin July 29, 2012, 8:40 p.m. UTC | #2
On Fri, Jul 27, 2012 at 04:38:11PM +0200, Paolo Bonzini wrote:
> On 05/07/2012 12:29, Jason Wang wrote:
> > Sometimes a virtio device needs to configure an irq affinity hint to maximize
> > performance. Instead of just exposing the irq of a virtqueue, this patch
> > introduces an API to set the affinity for a virtqueue.
> > 
> > The API is best-effort: the affinity hint may not be set as expected due to
> > platform support, irq sharing or irq type. Currently, only the pci method is
> > implemented, and we set the affinity as follows:
> > 
> > - if the device uses INTx, we just ignore the request
> > - if the device has a per-vq vector, we force the affinity hint
> > - if the virtqueues share an MSI vector, we make the affinity the OR over
> >   all affinities requested
> > 
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
> 
> Hmm, I don't see any benefit from this patch; I need to use
> irq_set_affinity (which, however, is not exported) to actually bind IRQs
> to CPUs.  Example:
> 
> with irq_set_affinity_hint:
>  43:   89  107  100   97   PCI-MSI-edge   virtio0-request
>  44:  178  195  268  199   PCI-MSI-edge   virtio0-request
>  45:   97  100   97  155   PCI-MSI-edge   virtio0-request
>  46:  234  261  213  218   PCI-MSI-edge   virtio0-request
> 
> with irq_set_affinity:
>  43:  721    0    0    1   PCI-MSI-edge   virtio0-request
>  44:    0  746    0    1   PCI-MSI-edge   virtio0-request
>  45:    0    0  658    0   PCI-MSI-edge   virtio0-request
>  46:    0    0    1  547   PCI-MSI-edge   virtio0-request
> 
> I gathered these quickly after boot, but real benchmarks show the same
> behavior, and performance actually gets worse with virtio-scsi
> multiqueue+irq_set_affinity_hint than with irq_set_affinity.
> 
> I also tried adding IRQ_NO_BALANCING, but the only effect is that I
> cannot set the affinity.
> 
> The queue steering algorithm I use in virtio-scsi is extremely simple
> and based on your tx code.  See how my nice pinning is destroyed:
> 
> # taskset -c 0 dd if=/dev/sda bs=1M count=1000 of=/dev/null iflag=direct
> # cat /proc/interrupts
>  43:  2690 2709 2691 2696   PCI-MSI-edge      virtio0-request
>  44:   109  122  199  124   PCI-MSI-edge      virtio0-request
>  45:   170  183  170  237   PCI-MSI-edge      virtio0-request
>  46:   143  166  125  125   PCI-MSI-edge      virtio0-request
> 
> All my requests come from CPU#0 and thus go to the first virtqueue, but
> the interrupts are serviced all over the place.
> 
> Did you set the affinity manually in your experiments, or perhaps there
> is a difference between scsi and networking... (interrupt mitigation?)
> 
> Paolo


You need to run irqbalance in the guest to make it actually work. Do you?
Paolo Bonzini July 30, 2012, 6:27 a.m. UTC | #3
On 29/07/2012 22:40, Michael S. Tsirkin wrote:
>> Did you set the affinity manually in your experiments, or perhaps there
>> is a difference between scsi and networking... (interrupt mitigation?)
>
> You need to run irqbalance in the guest to make it actually work. Do you?

Yes, of course, now on to debugging that one.  I just wanted to ask
before the weekend if I was missing something as obvious as that.

Paolo
Paolo Bonzini Aug. 9, 2012, 3:13 p.m. UTC | #4
On 05/07/2012 12:29, Jason Wang wrote:
> Sometimes a virtio device needs to configure an irq affinity hint to maximize
> performance. Instead of just exposing the irq of a virtqueue, this patch
> introduces an API to set the affinity for a virtqueue.
> 
> The API is best-effort: the affinity hint may not be set as expected due to
> platform support, irq sharing or irq type. Currently, only the pci method is
> implemented, and we set the affinity as follows:
> 
> - if the device uses INTx, we just ignore the request
> - if the device has a per-vq vector, we force the affinity hint
> - if the virtqueues share an MSI vector, we make the affinity the OR over
>   all affinities requested
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>

It looks like both Jason and I will need these patches during the 3.7
merge window, and from different trees (net-next vs. scsi).  How do we
synchronize?

Paolo
Paolo Bonzini Aug. 9, 2012, 3:14 p.m. UTC | #5
On 30/07/2012 08:27, Paolo Bonzini wrote:
>>> Did you set the affinity manually in your experiments, or perhaps there
>>> is a difference between scsi and networking... (interrupt mitigation?)
>>
>> You need to run irqbalance in the guest to make it actually work. Do you?
>
> Yes, of course, now on to debugging that one.  I just wanted to ask
> before the weekend if I was missing something as obvious as that.

It was indeed an irqbalance bug; it is now fixed upstream.

Paolo

Avi Kivity Aug. 9, 2012, 3:35 p.m. UTC | #6
On 08/09/2012 06:13 PM, Paolo Bonzini wrote:
> Il 05/07/2012 12:29, Jason Wang ha scritto:
>> Sometimes a virtio device needs to configure an irq affinity hint to maximize
>> performance. Instead of just exposing the irq of a virtqueue, this patch
>> introduces an API to set the affinity for a virtqueue.
>> 
>> The API is best-effort: the affinity hint may not be set as expected due to
>> platform support, irq sharing or irq type. Currently, only the pci method is
>> implemented, and we set the affinity as follows:
>> 
>> - if the device uses INTx, we just ignore the request
>> - if the device has a per-vq vector, we force the affinity hint
>> - if the virtqueues share an MSI vector, we make the affinity the OR over
>>   all affinities requested
>> 
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
> 
> It looks like both Jason and I will need these patches during the 3.7
> merge window, and from different trees (net-next vs. scsi).  How do we
> synchronize?

Get one of them to promise not to rebase, merge it, and base your
patches on top of the merge.

Patch

diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c
index adb24f2..2ff0451 100644
--- a/drivers/virtio/virtio_pci.c
+++ b/drivers/virtio/virtio_pci.c
@@ -48,6 +48,7 @@  struct virtio_pci_device
 	int msix_enabled;
 	int intx_enabled;
 	struct msix_entry *msix_entries;
+	cpumask_var_t *msix_affinity_masks;
 	/* Name strings for interrupts. This size should be enough,
 	 * and I'm too lazy to allocate each name separately. */
 	char (*msix_names)[256];
@@ -276,6 +277,10 @@  static void vp_free_vectors(struct virtio_device *vdev)
 	for (i = 0; i < vp_dev->msix_used_vectors; ++i)
 		free_irq(vp_dev->msix_entries[i].vector, vp_dev);
 
+	for (i = 0; i < vp_dev->msix_vectors; i++)
+		if (vp_dev->msix_affinity_masks[i])
+			free_cpumask_var(vp_dev->msix_affinity_masks[i]);
+
 	if (vp_dev->msix_enabled) {
 		/* Disable the vector used for configuration */
 		iowrite16(VIRTIO_MSI_NO_VECTOR,
@@ -293,6 +298,8 @@  static void vp_free_vectors(struct virtio_device *vdev)
 	vp_dev->msix_names = NULL;
 	kfree(vp_dev->msix_entries);
 	vp_dev->msix_entries = NULL;
+	kfree(vp_dev->msix_affinity_masks);
+	vp_dev->msix_affinity_masks = NULL;
 }
 
 static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
@@ -311,6 +318,15 @@  static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
 				     GFP_KERNEL);
 	if (!vp_dev->msix_names)
 		goto error;
+	vp_dev->msix_affinity_masks
+		= kzalloc(nvectors * sizeof *vp_dev->msix_affinity_masks,
+			  GFP_KERNEL);
+	if (!vp_dev->msix_affinity_masks)
+		goto error;
+	for (i = 0; i < nvectors; ++i)
+		if (!alloc_cpumask_var(&vp_dev->msix_affinity_masks[i],
+					GFP_KERNEL))
+			goto error;
 
 	for (i = 0; i < nvectors; ++i)
 		vp_dev->msix_entries[i].entry = i;
@@ -607,6 +623,35 @@  static const char *vp_bus_name(struct virtio_device *vdev)
 	return pci_name(vp_dev->pci_dev);
 }
 
+/* Set up the affinity for a virtqueue:
+ * - force the affinity for per vq vector
+ * - OR over all affinities for shared MSI
+ * - ignore the affinity request if we're using INTX
+ */
+static int vp_set_vq_affinity(struct virtqueue *vq, int cpu)
+{
+	struct virtio_device *vdev = vq->vdev;
+	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+	struct virtio_pci_vq_info *info = vq->priv;
+	struct cpumask *mask;
+	unsigned int irq;
+
+	if (!vq->callback)
+		return -EINVAL;
+
+	if (vp_dev->msix_enabled) {
+		mask = vp_dev->msix_affinity_masks[info->msix_vector];
+		irq = vp_dev->msix_entries[info->msix_vector].vector;
+		if (cpu == -1)
+			irq_set_affinity_hint(irq, NULL);
+		else {
+			cpumask_set_cpu(cpu, mask);
+			irq_set_affinity_hint(irq, mask);
+		}
+	}
+	return 0;
+}
+
 static struct virtio_config_ops virtio_pci_config_ops = {
 	.get		= vp_get,
 	.set		= vp_set,
@@ -618,6 +663,7 @@  static struct virtio_config_ops virtio_pci_config_ops = {
 	.get_features	= vp_get_features,
 	.finalize_features = vp_finalize_features,
 	.bus_name	= vp_bus_name,
+	.set_vq_affinity = vp_set_vq_affinity,
 };
 
 static void virtio_pci_release_dev(struct device *_d)
diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index fc457f4..2c4a989 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -98,6 +98,7 @@ 
  *	vdev: the virtio_device
  *      This returns a pointer to the bus name a la pci_name from which
  *      the caller can then copy.
+ * @set_vq_affinity: set the affinity for a virtqueue.
  */
 typedef void vq_callback_t(struct virtqueue *);
 struct virtio_config_ops {
@@ -116,6 +117,7 @@  struct virtio_config_ops {
 	u32 (*get_features)(struct virtio_device *vdev);
 	void (*finalize_features)(struct virtio_device *vdev);
 	const char *(*bus_name)(struct virtio_device *vdev);
+	int (*set_vq_affinity)(struct virtqueue *vq, int cpu);
 };
 
 /* If driver didn't advertise the feature, it will never appear. */
@@ -190,5 +192,24 @@  const char *virtio_bus_name(struct virtio_device *vdev)
 	return vdev->config->bus_name(vdev);
 }
 
+/**
+ * virtqueue_set_affinity - set the affinity for a virtqueue
+ * @vq: the virtqueue
+ * @cpu: the cpu number, or -1 to clear the affinity hint
+ *
+ * Note that this function is best-effort: the affinity hint may not be set
+ * due to config support, irq type and sharing.
+ *
+ */
+static inline
+int virtqueue_set_affinity(struct virtqueue *vq, int cpu)
+{
+	struct virtio_device *vdev = vq->vdev;
+	if (vdev->config->set_vq_affinity)
+		return vdev->config->set_vq_affinity(vq, cpu);
+	return 0;
+}
+
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_VIRTIO_CONFIG_H */