From: Jason Wang <jasowang@redhat.com>
To: Wei Wang <wei.w.wang@intel.com>, Stefan Hajnoczi <stefanha@gmail.com>
Cc: "virtio-dev@lists.oasis-open.org"
	<virtio-dev@lists.oasis-open.org>,
	"mst@redhat.com" <mst@redhat.com>,
	"marcandre.lureau@gmail.com" <marcandre.lureau@gmail.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
Date: Tue, 23 May 2017 14:32:52 +0800	[thread overview]
Message-ID: <3d9e8375-fbaa-c011-8242-b37cd971069b@redhat.com> (raw)
In-Reply-To: <5923CCF2.2000001@intel.com>



On May 23, 2017 at 13:47, Wei Wang wrote:
> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>
>>
>>> On May 22, 2017 at 19:46, Wang, Wei W wrote:
>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>> On May 19, 2017 at 23:33, Stefan Hajnoczi wrote:
>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>> On May 18, 2017 at 11:03, Wei Wang wrote:
>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>> On May 17, 2017 at 14:16, Jason Wang wrote:
>>>>>>>>> On May 16, 2017 at 15:12, Wei Wang wrote:
>>>>>>>>>>> Hi:
>>>>>>>>>>>
>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>>
>>>>>>>>>> OK. It may take some time to clean up the driver code before
>>>>>>>>>> posting it out. You can first take a look at the draft in the
>>>>>>>>>> repo here:
>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>
>>>>>>>>>> Best,
>>>>>>>>>> Wei
>>>>>>>>> Interesting, it looks like there's one copy on the tx side. We
>>>>>>>>> used to have zerocopy support in tun for VM2VM traffic. Could you
>>>>>>>>> please try to compare it with your vhost-pci-net by:
>>>>>>>>>
>>>>>>> We can analyze the whole data path - from VM1's network stack
>>>>>>> sending packets to VM2's network stack receiving packets. The
>>>>>>> number of copies is actually the same for both.
>>>>>> That's why I'm asking you to compare the performance. The only
>>>>>> reason for vhost-pci is performance. You should prove it.
>>>>> There is another reason for vhost-pci besides maximum performance:
>>>>>
>>>>> vhost-pci makes it possible for end-users to run networking or storage
>>>>> appliances in compute clouds.  Cloud providers do not allow end-users
>>>>> to run custom vhost-user processes on the host, so you need vhost-pci.
>>>>>
>>>>> Stefan
>>>> Then it has non-NFV use cases, and the question goes back to the
>>>> performance comparison between vhost-pci and zerocopy vhost_net. If
>>>> it does not perform better, it is less interesting, at least in this
>>>> case.
>>>>
>>> I can share what we have so far on vhost-pci vs. vhost-user:
>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf 
>>>
>>> Right now, I don’t have the environment to add the vhost_net test.
>>
>> Thanks, the numbers look good. But I have some questions:
>>
>> - Is the number measured through your vhost-pci kernel driver code?
>
> Yes, the kernel driver code.

Interesting. In the above link, "l2fwd" was used in the vhost-pci testing,
and I want to know more about the test configuration. If l2fwd is the one
from DPDK, how did you make it work with the kernel driver (through a
packet socket, perhaps)? If not, how did you configure the forwarding
(e.g. through a bridge, act_mirred, or something else)? And on the
OVS-DPDK side, were DPDK l2fwd and a PMD used in the testing?

>
>> - Have you tested packet size other than 64B?
>
> Not yet.

It would be better to test more sizes, since the time spent on a 64B copy
should be very short.
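A sketch of such a sweep, assuming netperf is installed in the sender VM
and netserver is running in the receiver VM (the peer address below is a
placeholder):

```shell
# Measure UDP throughput at several message sizes (placeholder peer address)
PEER=192.168.1.2
for size in 64 256 512 1024 1472; do
    echo "== ${size}B payload =="
    netperf -H "$PEER" -t UDP_STREAM -l 10 -- -m "$size"
done
```

64B mostly measures per-packet overhead; the larger sizes are where the
cost of an extra copy starts to show up (1472B is the largest UDP payload
that avoids IP fragmentation at a 1500-byte MTU).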

>
>> - Is zerocopy supported in OVS-dpdk? If yes, is it enabled in your test?
> Zerocopy was not used in the test, but I don't think zerocopy can double
> the throughput.

I agree, but we need to prove this with numbers.
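For the vhost_net side of that comparison, tx zerocopy is controlled by a
module parameter, so it can be checked and toggled like this (a sketch;
the module must not be in use when it is reloaded):

```shell
# Check whether tx zerocopy is currently enabled for vhost_net
cat /sys/module/vhost_net/parameters/experimental_zcopytx
# Reload the module with tx zerocopy enabled
# (no running VMs may be using vhost_net at this point)
modprobe -r vhost_net
modprobe vhost_net experimental_zcopytx=1
```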

Thanks

> On the other hand, we haven't put effort into optimizing
> the draft kernel driver yet.
>
> Best,
> Wei
>

