From: Tetsuya Mukawa
Subject: Re: [PATCH v5 1/3] vhost: Add callback and private data for vhost PMD
Date: Mon, 21 Dec 2015 11:10:10 +0900
Message-ID: <56775F82.3090306@igel.co.jp>
To: Rich Lane, Yuanhan Liu
Cc: dev@dpdk.org, ann.zhuangyanying@huawei.com
References: <1447392031-24970-3-git-send-email-mukawa@igel.co.jp>
 <1448355603-21275-1-git-send-email-mukawa@igel.co.jp>
 <1448355603-21275-2-git-send-email-mukawa@igel.co.jp>
 <20151217114223.GC29571@yliu-dev.sh.intel.com>
 <56737A5E.1030803@igel.co.jp>
 <20151218041536.GI29571@yliu-dev.sh.intel.com>
 <56738B5C.1030206@igel.co.jp>

On 2015/12/19 3:01, Rich Lane wrote:
> I'm using the vhost callbacks and struct virtio_net with the vhost PMD
> in a few ways:
>
> 1. new_device/destroy_device: Link state change (will be covered by the
> link status interrupt).
> 2. new_device: Add first queue to datapath.
> 3. vring_state_changed: Add/remove queue to datapath.
> 4. destroy_device: Remove all queues (vring_state_changed is not called
> when qemu is killed).
> 5. new_device and struct virtio_net: Determine NUMA node of the VM.
>
> The vring_state_changed callback is necessary because the VM might not
> be using the maximum number of RX queues. If I boot Linux in the VM it
> will start out using one RX queue, which can be changed with ethtool.
> The DPDK app in the host needs to be notified that it can start sending
> traffic to the new queue.
>
> The vring_state_changed callback is also useful for guest TX queues to
> avoid reading from an inactive queue.
>
> API I'd like to have:
>
> 1. Link status interrupt.
> 2. A new queue_state_changed callback. Unlike vring_state_changed this
> should cover the first queue at new_device and removal of all queues at
> destroy_device.
> 3. Per-queue or per-device NUMA node info.

Hi Rich and Yuanhan,

As Rich described, some users need more information when the interrupt
comes, and the virtio_net structure contains that information.
I guess it's very similar to interrupt handling on normal hardware:
first an interrupt comes, then the interrupt handler checks the status
register of the device to find out what actually happened.
In the vhost PMD case, reading the status register equals reading the
virtio_net structure.

So how about the specification below?

1. The link status interrupt of the vhost PMD occurs whenever a
new_device, destroy_device or vring_state_changed event happens.
2. The vhost PMD provides a function that lets users get the virtio_net
structure of the interrupted port.
(Probably almost the same as "rte_eth_vhost_portid2vdev" that I
described in "[PATCH v5 3/3] vhost: Add helper function to convert port
id to virtio device pointer".)

What kind of information the users need will probably depend on their
environments, so just providing the virtio_net structure may be good.

What do you think?
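For example, an application could catch the interrupt and inspect the
virtio_net structure like the rough sketch below. (This is only a sketch
of my proposal, not merged code: the callback prototype matches the
current ethdev one, but the "rte_eth_vhost.h" header name is my
assumption, and rte_eth_vhost_portid2vdev() is the helper proposed in
patch 3/3.)

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_virtio_net.h>
#include "rte_eth_vhost.h"	/* assumed header exporting the helper */

/* Called by the ethdev layer when the vhost PMD raises an LSC event,
 * i.e. on new_device, destroy_device or vring_state_changed. */
static void
vhost_lsc_handler(uint8_t port_id, enum rte_eth_event_type type,
		  void *param)
{
	struct rte_eth_link link;
	struct virtio_net *dev;

	RTE_SET_USED(param);
	if (type != RTE_ETH_EVENT_INTR_LSC)
		return;

	rte_eth_link_get_nowait(port_id, &link);

	/* "Reading the status register": look at virtio_net to see
	 * what actually happened (e.g. which queues became ready). */
	dev = rte_eth_vhost_portid2vdev(port_id);
	if (link.link_status && dev != NULL) {
		/* add the ready queues of this port to the datapath */
	} else {
		/* remove all queues of this port from the datapath */
	}
}

static void
register_vhost_lsc(uint8_t port_id)
{
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
				      vhost_lsc_handler, NULL);
}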
> Tetsuya,
>
> On Thu, Dec 17, 2015 at 8:28 PM, Tetsuya Mukawa wrote:
>
>> On 2015/12/18 13:15, Yuanhan Liu wrote:
>>> On Fri, Dec 18, 2015 at 12:15:42PM +0900, Tetsuya Mukawa wrote:
>>>> On 2015/12/17 20:42, Yuanhan Liu wrote:
>>>>> On Tue, Nov 24, 2015 at 06:00:01PM +0900, Tetsuya Mukawa wrote:
>>>>>> The vhost PMD will be a wrapper of the vhost library, but some of
>>>>>> the vhost library APIs cannot be mapped to ethdev library APIs.
>>>>>> Because of this, in some cases, we still need to use vhost library
>>>>>> APIs for a port created by the vhost PMD.
>>>>>>
>>>>>> Currently, when a virtio device is created or destroyed, the vhost
>>>>>> library calls one of the callback handlers. The vhost PMD needs to
>>>>>> use this pair of callback handlers to know which virtio devices are
>>>>>> actually connected.
>>>>>> Because we can register only one pair of callbacks with the vhost
>>>>>> library, if the PMD uses it, DPDK applications have no way to know
>>>>>> about the events.
>>>>>> This may break legacy DPDK applications that use the vhost library.
>>>>>> To prevent that, this patch adds one more pair of callbacks to the
>>>>>> vhost library, especially for the vhost PMD.
>>>>>> With the patch, legacy applications can use the vhost PMD even if
>>>>>> they need additional specific handling for virtio device creation
>>>>>> and destruction.
>>>>>> For example, a legacy application can call
>>>>>> rte_vhost_enable_guest_notification() in the callbacks to change
>>>>>> the setting.
>>>>> TBH, I never liked it from the beginning. Introducing two callbacks
>>>>> for one event is a bit messy, and therefore error prone.
>>>> I agree with you.
>>>>
>>>>> I have been thinking about this occasionally over the last few
>>>>> weeks, and have come up with something: we may introduce another
>>>>> layer of callbacks based on the vhost PMD itself, via a new API:
>>>>>
>>>>>     rte_eth_vhost_register_callback().
>>>>>
>>>>> We would then invoke those new callbacks inside the vhost PMD
>>>>> new_device() and destroy_device() implementations.
>>>>>
>>>>> And we could have the same callbacks vhost has, but I'm thinking
>>>>> that new_device() and destroy_device() don't sound like good names
>>>>> for a PMD driver. Maybe a name like "link_state_changed" is better?
>>>>>
>>>>> What do you think of that?
>>>> Yes, "link_state_changed" will be good.
>>>>
>>>> BTW, I thought it was OK for a DPDK app that uses the vhost PMD to
>>>> call vhost library APIs directly.
>>>> But you probably find that strange. Is this correct?
>>> Unluckily, that's true :)
>>>
>>>> If so, how about implementing the legacy status interrupt mechanism
>>>> in the vhost PMD?
>>>> For example, a DPDK app could register a callback handler like
>>>> "examples/link_status_interrupt" does.
>>>>
>>>> Also, if the app doesn't call vhost library APIs directly,
>>>> rte_eth_vhost_portid2vdev() becomes unnecessary, because the app no
>>>> longer needs to handle the virtio device structure.
>>>>
>>>>> On the other hand, I'm still wondering whether it is really
>>>>> necessary to let the application call vhost functions like
>>>>> rte_vhost_enable_guest_notification() with the vhost PMD driver.
>>>> The basic concept of my patch is that the vhost PMD provides the
>>>> features that the vhost library provides.
>>> I don't think that's necessary. Let's just treat it as a normal PMD
>>> driver, having nothing to do with the vhost library.
>>>
>>>> How about removing rte_vhost_enable_guest_notification() from the
>>>> "vhost library"?
>>>> (I'm also not sure what the use cases are.)
>>>> If we can do this, the vhost PMD doesn't need to take care of it
>>>> either.
>>>> Or, if rte_vhost_enable_guest_notification() will be removed in the
>>>> future, the vhost PMD is able to ignore it.
>>> You could either call it in vhost-pmd (which you have already done),
>>> or ignore it in vhost-pmd, but don't remove it from the vhost library.
>>>
>>>> Please let me sum up my understanding of your suggestions:
>>>> - Change the concept of the patch so that vhost library APIs are not
>>>>   called directly. These should be wrapped by ethdev APIs.
>>>> - Remove rte_eth_vhost_portid2vdev(), because of the above concept
>>>>   change.
>>>> - Implement the legacy link status changed interrupt in the vhost
>>>>   PMD instead of using our own callback mechanism.
>>>> - Check whether we can remove rte_vhost_enable_guest_notification()
>>>>   from the vhost library.
>>> So, how about making it __fairly__ simple as the first step, to get
>>> it merged easily: we don't assume the applications will call any
>>> vhost library functions any more, so we don't need the callback, and
>>> we don't need rte_eth_vhost_portid2vdev() either. Again, just let it
>>> be a fairly normal (nothing special) PMD driver. (UNLESS there is a
>>> real must, which I don't see so far.)
>>>
>>> Tetsuya, what do you think of that then?
>> I agree with you, but I will wait a few days, because if someone wants
>> to use these from the vhost PMD, they will probably provide use cases.
>> And if there are no use cases, let's do it as above.
>>
>> Thanks,
>> Tetsuya