From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Cc: netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [PATCH V2 net-next 0/5] xen-net{back, front}: Multiple transmit and receive queues
Date: Fri, 14 Feb 2014 11:50:19 +0000	[thread overview]
Message-ID: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com> (raw)


This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the Xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
   netfront respectively, and modify the rest of the code to use these
   as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.
 - Patch 5 documents the XenStore keys required for the new feature
   in include/xen/interface/io/netif.h.

All other transmit and receive processing remains unchanged, i.e. there
is one kthread per queue and one NAPI context per queue.
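
For illustration only, the per-queue state factored out in patches 1
and 3 takes roughly the following shape. The field names below are a
hypothetical sketch, not the authoritative definitions, which live in
the patches themselves.

    /* Hypothetical sketch of a factored-out per-queue struct. Each
     * queue owns its shared rings, event channels, kthread and NAPI
     * context, so queues can be processed independently.
     */
    struct xenvif_queue {
            unsigned int id;          /* queue index, 0..num_queues-1 */
            struct xenvif *vif;       /* back-pointer to shared vif state */

            struct xen_netif_tx_back_ring tx; /* per-queue TX shared ring */
            struct xen_netif_rx_back_ring rx; /* per-queue RX shared ring */
            unsigned int tx_irq;      /* TX event channel IRQ */
            unsigned int rx_irq;      /* RX event channel IRQ */

            struct task_struct *task; /* per-queue kthread */
            struct napi_struct napi;  /* per-queue NAPI context */
    };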

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is
    scaling out across available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet
(i.e. over the TCP source/destination ports and IP source/destination
addresses) and is not negotiated between the frontend and backend,
since only one option exists. Future patches to support other frontends
(particularly Windows) will need to add the capability to negotiate not
only the choice of hash algorithm, but also to let the frontend specify
parameters for it.
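
As a rough illustration of this style of flow-based selection (a
simplified helper with a simplified signature, not the exact code in
the patches):

    /* Hypothetical sketch: pick a transmit queue from the packet's L4
     * flow hash. skb_get_hash() hashes the IP addresses and TCP/UDP
     * ports where available, so every packet of a given TCP stream
     * maps to the same queue.
     */
    static u16 xennet_pick_queue(struct net_device *dev,
                                 struct sk_buff *skb)
    {
            unsigned int num_queues = dev->real_num_tx_queues;
            u32 hash = skb_get_hash(skb);

            /* Scale the 32-bit hash into [0, num_queues) without a
             * modulo operation.
             */
            return (u16)(((u64)hash * num_queues) >> 32);
    }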

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/..., where N ranges
from 0 to the requested number of queues minus one, inclusive. If only
one queue is requested, the driver falls back to the flat structure, in
which the ring references and event channels are written at the same
level as the other vif information.
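
To make the layout concrete, a vif negotiated with two queues would
write keys along the following lines (domain and device numbers are
illustrative; the authoritative key list is in the netif.h patch):

    /local/domain/1/device/vif/0/multi-queue-num-queues   = "2"
    /local/domain/1/device/vif/0/queue-0/tx-ring-ref      = "<grant-ref>"
    /local/domain/1/device/vif/0/queue-0/rx-ring-ref      = "<grant-ref>"
    /local/domain/1/device/vif/0/queue-0/event-channel-tx = "<evtchn>"
    /local/domain/1/device/vif/0/queue-0/event-channel-rx = "<evtchn>"
    /local/domain/1/device/vif/0/queue-1/tx-ring-ref      = "<grant-ref>"
    /local/domain/1/device/vif/0/queue-1/rx-ring-ref      = "<grant-ref>"
    /local/domain/1/device/vif/0/queue-1/event-channel-tx = "<evtchn>"
    /local/domain/1/device/vif/0/queue-1/event-channel-rx = "<evtchn>"

A single-queue vif instead keeps tx-ring-ref, rx-ring-ref and the event
channels directly under .../vif/0/ as before.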

V2:
- Rebase onto net-next
- Change queue->number to queue->id
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu
- Fix up formatting and style issues
- XenStore protocol changes documented in netif.h
- Default max. number of queues to num_online_cpus()
- Check that the requested number of queues does not exceed the maximum
  (see the sketch below)
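
To make the last two changes concrete, here is a minimal sketch of how
the default and the bounds check might look. The helper name and the
module parameter handling are hypothetical; the actual code in the
patches may differ.

    #include <linux/module.h>
    #include <linux/cpumask.h>
    #include <linux/kernel.h>

    /* Hypothetical sketch: cap the per-interface queue count at a
     * module-parameter maximum that defaults to num_online_cpus().
     */
    static unsigned int xenvif_max_queues;
    module_param_named(max_queues, xenvif_max_queues, uint, 0644);
    MODULE_PARM_DESC(max_queues,
                     "Maximum number of queues per virtual interface");

    static unsigned int xenvif_clamp_num_queues(unsigned int requested)
    {
            if (!xenvif_max_queues)
                    xenvif_max_queues = num_online_cpus();
            return min(requested, xenvif_max_queues);
    }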

--
Andrew J. Bennieston

Thread overview: 34+ messages

2014-02-14 11:50 [PATCH V2 net-next 0/5] xen-net{back,front}: Multiple transmit and receive queues Andrew J. Bennieston [this message]
2014-02-14 11:50 ` [PATCH V2 net-next 1/5] xen-netback: Factor queue-specific data into queue struct Andrew J. Bennieston
2014-02-14 11:50 ` [PATCH V2 net-next 2/5] xen-netback: Add support for multiple queues Andrew J. Bennieston
2014-02-14 14:11   ` Wei Liu
2014-02-14 14:57     ` Andrew Bennieston
2014-02-14 15:36       ` Wei Liu
2014-02-14 15:42         ` Andrew Bennieston
2014-02-14 11:50 ` [PATCH V2 net-next 3/5] xen-netfront: Factor queue-specific data into queue struct Andrew J. Bennieston
2014-02-14 11:50 ` [PATCH V2 net-next 4/5] xen-netfront: Add support for multiple queues Andrew J. Bennieston
2014-02-14 14:13   ` Wei Liu
2014-02-14 14:58     ` Andrew Bennieston
2014-02-14 11:50 ` [PATCH V2 net-next 5/5] xen-net{back,front}: Document multi-queue feature in netif.h Andrew J. Bennieston
2014-02-14 14:06 ` [PATCH V2 net-next 0/5] xen-net{back,front}: Multiple transmit and receive queues Wei Liu
2014-02-14 14:53   ` Andrew Bennieston
2014-02-14 15:25     ` Wei Liu
2014-02-14 15:40       ` Andrew Bennieston
2014-02-14 15:52         ` Wei Liu
