From: Jason Wang <jasowang@redhat.com>
To: Wei Wang <wei.w.wang@intel.com>,
	stefanha@gmail.com, marcandre.lureau@gmail.com, mst@redhat.com,
	pbonzini@redhat.com, virtio-dev@lists.oasis-open.org,
	qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
Date: Fri, 19 May 2017 17:53:07 +0800
Message-ID: <4aa88819-7d82-5172-0ccf-41211b416082@redhat.com>
In-Reply-To: <591EB435.4080109@intel.com>



On 05/19/2017 17:00, Wei Wang wrote:
> On 05/19/2017 11:10 AM, Jason Wang wrote:
>>
>>
>> On 05/18/2017 11:03, Wei Wang wrote:
>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>
>>>>
>>>> On 05/17/2017 14:16, Jason Wang wrote:
>>>>>
>>>>>
>>>>> On 05/16/2017 15:12, Wei Wang wrote:
>>>>>>>>
>>>>>>>
>>>>>>> Hi:
>>>>>>>
>>>>>>> Care to post the driver code too?
>>>>>>>
>>>>>> OK. It may take some time to clean up the driver code before 
>>>>>> posting it out. In the meantime, you can take a look at the 
>>>>>> draft in the repo here:
>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>
>>>>>> Best,
>>>>>> Wei
>>>>>
>>>>> Interesting, it looks like there's one copy on the TX side. We 
>>>>> used to have zerocopy support in tun for VM2VM traffic. Could you 
>>>>> please try to compare it with your vhost-pci-net by:
>>>>>
>>> We can analyze the whole data path - from VM1's network stack 
>>> sending packets to VM2's network stack receiving them. The number 
>>> of copies is actually the same for both.
>>
>> That's why I'm asking you to compare the performance. The only reason 
>> for vhost-pci is performance. You should prove it.
>>
>>>
>>> vhost-pci: the 1-copy happens in VM1's driver xmit(), which copies 
>>> packets from its network stack to VM2's RX ring buffer. (We call it 
>>> "zerocopy" because there is no intermediate copy between the VMs.)
>>> zerocopy-enabled vhost-net: the 1-copy happens in tun's recvmsg, 
>>> which copies packets from VM1's TX ring buffer to VM2's RX ring buffer.
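
For concreteness, a minimal sketch of that one-copy TX path could look
like the following; the vpnet_* names are hypothetical and not taken
from the posted driver, only the kernel netdev APIs are real, and the
sketch simply assumes VM2's RX ring is mapped into VM1 through the
vhost-pci device's memory BAR:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct vpnet_priv {
	void *peer_rx_ring;	/* VM2's RX ring, mapped via the vhost-pci BAR */
};

/* Hypothetical helpers, assumed to be implemented elsewhere in the driver. */
void *vpnet_get_rx_slot(void *ring, unsigned int len);
void vpnet_publish_rx_slot(void *ring, void *slot, unsigned int len);
void vpnet_kick_peer(struct vpnet_priv *vp);

static netdev_tx_t vpnet_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct vpnet_priv *vp = netdev_priv(dev);
	void *dst = vpnet_get_rx_slot(vp->peer_rx_ring, skb->len);

	if (!dst)	/* a real driver would stop the queue here */
		return NETDEV_TX_BUSY;

	/* The single copy on this path: VM1's stack -> VM2's RX buffer. */
	skb_copy_bits(skb, 0, dst, skb->len);
	vpnet_publish_rx_slot(vp->peer_rx_ring, dst, skb->len);
	vpnet_kick_peer(vp);		/* inter-VM notification */

	dev_consume_skb_any(skb);
	return NETDEV_TX_OK;
}
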
>>
>> Actually, there's a major difference here. You do the copy in the 
>> guest, which consumes the vCPU thread's time slice on the host. 
>> Vhost_net does this in its own thread. So I feel vhost_net may even 
>> be faster here, but maybe I'm wrong.
>>
>
> The code path using vhost_net is much longer - the ping test shows 
> that zerocopy-based vhost_net reports around 0.237 ms, while 
> vhost-pci reports around 0.06 ms.
> Due to an environment issue, I can only report the throughput numbers later.

Yes, vhost-pci should have better latency by design. But we should 
measure pps and packet sizes other than 64 bytes as well. I agree 
vhost_net has bad latency, but that does not mean it cannot be improved 
(few people have worked on improving it in the past), especially when 
we know the destination is another VM.
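
For example, latency, throughput and pps at several packet sizes could
be collected with standard tools; this is only a sketch (addresses,
interface names and MACs are placeholders, and the pktgen sample flags
may differ between kernel trees):

  # latency (request/response transactions per second)
  netperf -H 192.168.1.2 -t TCP_RR -l 30

  # throughput at several message sizes
  for sz in 64 256 1024 1500; do
      netperf -H 192.168.1.2 -t UDP_STREAM -l 30 -- -m $sz
  done

  # pps-oriented load generation with the kernel's pktgen sample script
  samples/pktgen/pktgen_sample01_simple.sh -i eth0 -d 192.168.1.2 \
      -m 52:54:00:12:34:56 -s 64 -n 1000000
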

>
>>>
>>> That being said, we compared against vhost-user instead of 
>>> vhost_net, because vhost-user is the one used in NFV, which we 
>>> think is a major use case for vhost-pci.
>>
>> If this is true, why not draft a PMD driver instead of a kernel one? 
>
> Yes, that's right. There are actually two directions for the 
> vhost-pci driver implementation - a kernel driver and a DPDK PMD. The 
> QEMU-side device patches are posted first for discussion, because 
> once the device part is ready, we can have the related team work on 
> the PMD driver as well. As usual, the PMD driver would give much 
> better throughput.

I think a PMD should be easier to prototype than a kernel driver.
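
As an illustration, a vhost-pci PMD prototype would mostly be a
poll-mode RX/TX pair over the mapped peer rings. A rough sketch of the
RX side is below; the vpci_* queue/ring helpers are hypothetical, only
the rte_mbuf calls are standard DPDK API:

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_memcpy.h>

/* Hypothetical per-queue state: the peer VM's TX ring mapped through
 * the vhost-pci BAR, plus a local mempool for mbufs. */
struct vpci_rx_queue {
	struct rte_mempool *mp;
	void *peer_tx_ring;
};

/* Hypothetical helpers that walk the mapped ring. */
int vpci_ring_peek(void *ring, void **data, uint16_t *len);
void vpci_ring_advance(void *ring);

static uint16_t
vpci_rx_burst(void *rxq, struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	struct vpci_rx_queue *q = rxq;
	uint16_t nb_rx = 0;
	void *data;
	uint16_t len;

	while (nb_rx < nb_pkts &&
	       vpci_ring_peek(q->peer_tx_ring, &data, &len) == 0) {
		struct rte_mbuf *m = rte_pktmbuf_alloc(q->mp);

		if (m == NULL)
			break;
		/* Copy out of the peer's ring into a local mbuf. */
		rte_memcpy(rte_pktmbuf_mtod(m, void *), data, len);
		m->data_len = len;
		m->pkt_len = len;
		pkts[nb_rx++] = m;
		vpci_ring_advance(q->peer_tx_ring);
	}
	return nb_rx;
}
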

>
> So, I think at this stage we should focus on reviewing the device 
> part, and use the kernel driver to prove that the device design and 
> implementation are reasonable and functional.
>

Probably both.

>
>> And did you use the virtio-net kernel driver to compare the 
>> performance? If yes, has OVS-DPDK been optimized for the kernel 
>> driver (I think not)?
>>
>
> We used the legacy OVS+DPDK.
> Another issue with the existing OVS+DPDK usage is its centralized 
> nature. With vhost-pci, we will be able to decentralize the usage.
>
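
For context, the centralized setup means every VM attaches to one host
OVS-DPDK switch through a vhost-user port, so all inter-VM traffic
crosses that central process. Roughly (bridge/port names are
placeholders, option syntax as in OVS-DPDK of this era):

  ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
  ovs-vsctl add-port br0 vhost-user-1 \
      -- set Interface vhost-user-1 type=dpdkvhostuser
  ovs-vsctl add-port br0 vhost-user-2 \
      -- set Interface vhost-user-2 type=dpdkvhostuser

With vhost-pci, the two VMs exchange packets directly over the mapped
memory, so no central switch sits on the VM2VM data path.
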

Right, so I think we should prove:

- For usage, prove or make vhost-pci better than the existing 
shared-memory based solutions. (Or is virtio good at shared memory?)
- For performance, prove or make vhost-pci better than the existing 
centralized solution.

>> More importantly, if vhost-pci is faster, its kernel driver should 
>> also be faster than virtio-net, no?
>
> Sorry about the confusion. We are actually not trying to use vhost-pci 
> to replace virtio-net. Rather, vhost-pci
> can be viewed as another type of backend for virtio-net to be used in 
> NFV (the communication channel is
> vhost-pci-net<->virtio_net).
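
As a rough illustration of that channel: VM1 uses the standard
virtio-net/vhost-user options (real QEMU syntax, placeholder socket and
hugepage paths), while the VM2 side is only sketched, since the exact
vhost-pci command-line syntax is whatever this patch series defines:

  # VM1: virtio-net frontend over a vhost-user netdev
  qemu-system-x86_64 ... \
      -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=char0,path=/tmp/vhost-pci.sock \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0

  # VM2: vhost-pci backend; it enables the vhost-pci slave (patch 02)
  # and exposes a vhost-pci-net-pci device (patch 05). The option below
  # is illustrative only, not taken from the patches.
  qemu-system-x86_64 ... \
      -device vhost-pci-net-pci,...
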

My point is that performance numbers are important for proving the 
correctness of both the design and the engineering. If it's slow, it is 
less interesting for NFV.

Thanks

>
>
> Best,
> Wei


Thread overview: 52+ messages
2017-05-12  8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 01/16] vhost-user: share the vhost-user protocol related structures Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 02/16] vl: add the vhost-pci-slave command line option Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 03/16] vhost-pci-slave: create a vhost-user slave to support vhost-pci Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 04/16] vhost-pci-net: add vhost-pci-net Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 05/16] vhost-pci-net-pci: add vhost-pci-net-pci Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 06/16] virtio: add inter-vm notification support Wei Wang
2017-05-15  0:21   ` [Qemu-devel] [virtio-dev] " Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 07/16] vhost-user: send device id to the slave Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 08/16] vhost-user: send guest physical address of virtqueues " Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 09/16] vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 10/16] vhost-pci-net: send the negotiated feature bits to the master Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 11/16] vhost-user: add asynchronous read for the vhost-user master Wei Wang
2017-05-12  8:51   ` Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 12/16] vhost-user: handling VHOST_USER_SET_FEATURES Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 13/16] vhost-pci-slave: add "reset_virtio" Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 14/16] vhost-pci-slave: add support to delete a vhost-pci device Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 15/16] vhost-pci-net: tell the driver that it is ready to send packets Wei Wang
2017-05-12  8:35 ` [Qemu-devel] [PATCH v2 16/16] vl: enable vhost-pci-slave Wei Wang
2017-05-12  9:30 ` [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication no-reply
2017-05-16 15:21   ` Michael S. Tsirkin
2017-05-16  6:46 ` Jason Wang
2017-05-16  7:12   ` [Qemu-devel] [virtio-dev] " Wei Wang
2017-05-17  6:16     ` Jason Wang
2017-05-17  6:22       ` Jason Wang
2017-05-18  3:03         ` Wei Wang
2017-05-19  3:10           ` [Qemu-devel] [virtio-dev] " Jason Wang
2017-05-19  9:00             ` Wei Wang
2017-05-19  9:53               ` Jason Wang [this message]
2017-05-19 20:44               ` Michael S. Tsirkin
2017-05-23 11:09                 ` Wei Wang
2017-05-23 15:15                   ` Michael S. Tsirkin
2017-05-19 15:33             ` Stefan Hajnoczi
2017-05-22  2:27               ` Jason Wang
2017-05-22 11:46                 ` Wang, Wei W
2017-05-23  2:08                   ` Jason Wang
2017-05-23  5:47                     ` Wei Wang
2017-05-23  6:32                       ` Jason Wang
2017-05-23 10:48                         ` Wei Wang
2017-05-24  3:24                           ` Jason Wang
2017-05-24  8:31                             ` Wei Wang
2017-05-25  7:59                               ` Jason Wang
2017-05-25 12:01                                 ` Wei Wang
2017-05-25 12:22                                   ` Jason Wang
2017-05-25 12:31                                     ` [Qemu-devel] [virtio-dev] " Jason Wang
2017-05-25 17:57                                       ` Michael S. Tsirkin
2017-06-04 10:34                                         ` Wei Wang
2017-06-05  2:21                                           ` Michael S. Tsirkin
2017-05-25 14:35                                     ` [Qemu-devel] " Eric Blake
2017-05-26  4:26                                       ` Jason Wang
2017-05-19 16:49             ` Michael S. Tsirkin
2017-05-22  2:22               ` Jason Wang
