From: Paolo Bonzini <pbonzini@redhat.com>
To: Jason Wang <jasowang@redhat.com>,
	mst@redhat.com, "Nicholas A. Bellinger" <nab@linux-iscsi.org>
Cc: mashirle@us.ibm.com, krkumar2@in.ibm.com,
	habanero@linux.vnet.ibm.com, rusty@rustcorp.com.au,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, edumazet@google.com,
	tahm@linux.vnet.ibm.com, jwhan@filewood.snu.ac.kr,
	davem@davemloft.net, kvm@vger.kernel.org, sri@us.ibm.com
Subject: Re: [net-next RFC V5 3/5] virtio: intorduce an API to set affinity for a virtqueue
Date: Fri, 27 Jul 2012 16:38:11 +0200	[thread overview]
Message-ID: <5012A7D3.4040800@redhat.com> (raw)
In-Reply-To: <1341484194-8108-4-git-send-email-jasowang@redhat.com>

On 05/07/2012 12:29, Jason Wang wrote:
> Sometimes, a virtio device needs to configure an irq affinity hint to
> maximize performance. Instead of just exposing the irq of a virtqueue,
> this patch introduces an API to set the affinity for a virtqueue.
> 
> The API is best-effort; the affinity hint may not be set as expected due
> to platform support, irq sharing or irq type. Currently, only the pci
> method is implemented, and we set the affinity according to:
> 
> - if the device uses INTx, we just ignore the request
> - if the device has a per-vq vector, we force the affinity hint
> - if the virtqueues share an MSI vector, make the affinity hint the OR
>  of all affinities requested
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>

Hmm, I don't see any benefit from this patch; I need to use
irq_set_affinity (which, however, is not exported) to actually bind IRQs
to CPUs.  Example:

with irq_set_affinity_hint:
 43:   89  107  100   97   PCI-MSI-edge   virtio0-request
 44:  178  195  268  199   PCI-MSI-edge   virtio0-request
 45:   97  100   97  155   PCI-MSI-edge   virtio0-request
 46:  234  261  213  218   PCI-MSI-edge   virtio0-request

with irq_set_affinity:
 43:  721    0    0    1   PCI-MSI-edge   virtio0-request
 44:    0  746    0    1   PCI-MSI-edge   virtio0-request
 45:    0    0  658    0   PCI-MSI-edge   virtio0-request
 46:    0    0    1  547   PCI-MSI-edge   virtio0-request

I gathered these quickly after boot, but real benchmarks show the same
behavior, and performance actually gets worse with virtio-scsi
multiqueue+irq_set_affinity_hint than with irq_set_affinity.

I also tried adding IRQ_NO_BALANCING, but the only effect is that I
cannot set the affinity.

The queue steering algorithm I use in virtio-scsi is extremely simple
and based on your tx code.  See how my nice pinning is destroyed:

# taskset -c 0 dd if=/dev/sda bs=1M count=1000 of=/dev/null iflag=direct
# cat /proc/interrupts
 43:  2690 2709 2691 2696   PCI-MSI-edge      virtio0-request
 44:   109  122  199  124   PCI-MSI-edge      virtio0-request
 45:   170  183  170  237   PCI-MSI-edge      virtio0-request
 46:   143  166  125  125   PCI-MSI-edge      virtio0-request

All my requests come from CPU#0 and thus go to the first virtqueue, but
the interrupts are serviced all over the place.

Did you set the affinity manually in your experiments, or perhaps there
is a difference between scsi and networking... (interrupt mitigation?)

Paolo
