linux-pci.vger.kernel.org archive mirror
From: Haotian Wang <haotian.wang@sifive.com>
To: kishon@ti.com, lorenzo.pieralisi@arm.com, bhelgaas@google.com,
	mst@redhat.com
Cc: linux-pci@vger.kernel.org, haotian.wang@duke.edu
Subject: Re: [PATCH] pci: endpoint: functions: Add a virtnet EP function
Date: Tue,  3 Sep 2019 20:55:22 -0400	[thread overview]
Message-ID: <20190904005522.2190-1-haotian.wang@sifive.com> (raw)
In-Reply-To: <7067e657-5c8e-b724-fa6a-086fece6e6c3@redhat.com>

On Tue, Sep 3, 2019 at 6:42 AM Jason Wang <jasowang@redhat.com> wrote:
> So if I understand correctly, what you want is:
> 
> 1) epf virtio actually represents a full virtio pci device to the host 
> Linux.
> 2) to endpoint Linux, you also want to present a virtio device (by 
> copying data between two vrings) that has its own config ops
> 
> This looks feasible but tricky. One part is the feature negotiation. You 
> probably need to prepare two sets of features, one for each side. Consider 
> your case: you claim the device supports GUEST_CSUM, but since no 
> HOST_CSUM is advertised, neither side will send packets without csum. And 
> if you claim HOST_CSUM, you need to deal with the case where one side 
> does not support GUEST_CSUM (e.g. do the checksum yourself). And things will 
> be even more complex for other offloading features. Another part is the 
> configuration space. You need to handle inconsistency between the two 
> sides, e.g. one side wants 4 queues but the other only does 1.

You are right about the two bullet points. You are also right about the
two sets of features.

When I advertised GUEST_CSUM and HOST_CSUM in both devices' features, I
consistently got errors about packets having an incorrect "total length"
field in their IP headers. I ran into a number of other problems when I
tried to implement the other offloading features.

Also, I encountered an inconsistency with the virtio 1.1 spec. According
to the spec, when the legacy interface is used, the virtio_net_hdr and
the actual packet are supposed to go in two different descriptors in the
rx queue. After a lot of trial and error, I found that packets actually
had to be placed directly after the virtio_net_hdr struct, in the same
descriptor.

Given that, I still have not addressed the situations where the two
sides have different features. The solution right now is therefore to
hardcode the features the epf supports in the source code, including the
offloading features, mergeable buffers and the number of queues.

> > Also that design uses the conventional virtio/vhost framework. In this
> > epf, are you implying instead of creating a Device A, create some sort
> > of vhost instead?
> 
> 
> Kind of. In order to address the above limitation, you probably want to 
> implement a vringh-based netdevice and driver. It will work like this: 
> instead of trying to present a virtio-net device to the endpoint, 
> present a new type of network device that uses two vringh rings instead 
> of virtio rings. The vringh ring is usually used to implement the 
> counterpart of a virtio driver. The advantages are obvious:
> 
> - no need to deal with two sets of features, config space, etc.
> - network specific: from the point of view of endpoint Linux, it is not 
> a virtio device, so there is no need to care about transport stuff or 
> embed internal virtio-net specific data structures
> - reuse of existing code (vringh) to avoid duplicated bugs; implementing 
> a virtqueue is kind of a challenge

Now I see what you mean. The data copying part stays the same, but it
becomes transparent to the vhost/virtio framework. You want me to create
a new type of network_device based on the vhost/vringh infrastructure
instead of an epf_virtio_device. Yes, that is doable.

There could be performance overhead in using vhost. The epf_virtio_device
invokes its callback functions in the most straightforward way, while in
vhost I imagine there is some kind of task management/scheduling going
on. But all of this is conjecture; I will write the code and see whether
throughput actually drops.

Thanks for clarifying.

Best,
Haotian

Thread overview: 24+ messages
2019-08-23 21:31 [PATCH] pci: endpoint: functions: Add a virtnet EP function Haotian Wang
2019-08-26 10:51 ` Kishon Vijay Abraham I
2019-08-26 21:59   ` Haotian Wang
2019-08-27  8:12     ` Kishon Vijay Abraham I
2019-08-27 18:01       ` Haotian Wang
2019-08-30  6:11 ` Jason Wang
2019-08-30 23:06   ` Haotian Wang
2019-09-02  3:50     ` Jason Wang
2019-09-02 20:05       ` Haotian Wang
2019-09-03 10:42         ` Jason Wang
2019-09-04  0:55           ` Haotian Wang [this message]
2019-09-04 21:58           ` Haotian Wang
2019-09-05  2:56             ` Jason Wang
2019-09-05  3:28               ` Haotian Wang
2019-11-25 12:49               ` Kishon Vijay Abraham I
2019-11-26  9:58                 ` Jason Wang
2019-11-26 12:35                   ` Kishon Vijay Abraham I
2019-11-26 21:55                     ` Alan Mikhak
2019-11-26 22:01                       ` Alan Mikhak
2019-11-27  3:04                         ` Jason Wang
2019-09-03  6:25 ` Michael S. Tsirkin
2019-09-03 20:39   ` Haotian Wang
2019-09-05  7:07     ` Michael S. Tsirkin
2019-09-05 16:15       ` Haotian Wang
