From: Tetsuya Mukawa
Subject: Re: [PATCH v4 2/2] vhost: Add VHOST PMD
Date: Tue, 24 Nov 2015 13:41:42 +0900
To: Rich Lane
Cc: yuanhan.liu@intel.com, dev@dpdk.org, ann.zhuangyanying@huawei.com

On 2015/11/21 9:15, Rich Lane wrote:
> On Thu, Nov 12, 2015 at 9:20 PM, Tetsuya Mukawa wrote:
>
>> +static uint16_t
>> +eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>> +{
> ...
>> +
>> +	/* Enqueue packets to guest RX queue */
>> +	nb_tx = rte_vhost_enqueue_burst(r->device,
>> +			r->virtqueue_id, bufs, nb_bufs);
>> +
>> +	r->tx_pkts += nb_tx;
>> +	r->err_pkts += nb_bufs - nb_tx;
>
> I don't think a full TX queue is counted as an error by physical NIC PMDs
> like ixgbe and i40e. It is counted as an error by the af_packet, pcap, and
> ring PMDs. I'd suggest not counting it as an error because it's a common
> and expected condition, and the application might just retry the TX later.

Hi Rich,

Thanks for commenting. I will count it as "imissed" instead of as an error.

> Are the byte counts left out because it would be a performance hit? It
> seems like it would be a minimal cost given how much we're already touching
> each packet.

I left the byte counts out for performance, but you are right; I will add
them. (A rough sketch of the accounting I have in mind is at the end of
this mail.)

>> +static int
>> +new_device(struct virtio_net *dev)
>> +{
> ...
>> +
>> +	if ((dev->virt_qp_nb < internal->nb_rx_queues) ||
>> +		(dev->virt_qp_nb < internal->nb_tx_queues)) {
>> +		RTE_LOG(INFO, PMD, "Not enough queues\n");
>> +		return -1;
>> +	}
>
> Would it make sense to take the minimum of the guest and host queue pairs
> and use that below in place of nb_rx_queues/nb_tx_queues? That way the host
> can support a large maximum number of queues and each guest can choose how
> many it wants to use. The host application will receive vring_state_changed
> callbacks for each queue the guest activates.

Thanks for checking this. I agree with you.

After reading your comment, here is my understanding of how the PMD should
behave: the PMD assumes that the virtio-net device (QEMU) has at least as
many queues as specified in the vhost PMD options. If that assumption is
broken, the application should handle the vring_state_changed callback
correctly and stop accessing the disabled queues, so that it does not waste
CPU cycles on them.

Anyway, I will just remove the if-condition above, based on that assumption.

Thanks,
Tetsuya
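--
Below is a rough, untested sketch of the TX accounting I have in mind, not
the actual patch. It assumes the vhost_queue structure from the patch gains
"missed_pkts" and "tx_bytes" counters (the names are placeholders); only
rte_vhost_enqueue_burst(), rte_pktmbuf_pkt_len() and rte_pktmbuf_free() are
existing APIs.

static uint16_t
eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	struct vhost_queue *r = q;
	uint16_t i, nb_tx;

	/* Enqueue packets to guest RX queue */
	nb_tx = rte_vhost_enqueue_burst(r->device,
			r->virtqueue_id, bufs, nb_bufs);

	r->tx_pkts += nb_tx;
	/* A full guest queue is a normal, recoverable condition, so record
	 * the shortfall as "missed" packets rather than as errors. */
	r->missed_pkts += nb_bufs - nb_tx;

	/* Byte counters for the packets actually handed to the guest. */
	for (i = 0; i < nb_tx; i++)
		r->tx_bytes += rte_pktmbuf_pkt_len(bufs[i]);

	/* Free only what was enqueued; the application may retry the rest. */
	for (i = 0; i < nb_tx; i++)
		rte_pktmbuf_free(bufs[i]);

	return nb_tx;
}

And this is roughly how I expect vring_state_changed to be used so that
disabled queues are not polled. find_internal_queue() and the "enabled"
flag are placeholders for whatever the PMD ends up using; the callback
signature is the one from librte_vhost's virtio_net_device_ops.

static int
vring_state_changed(struct virtio_net *dev, uint16_t queue_id, int enable)
{
	struct vhost_queue *vq;

	/* queue_id is qp_index * VIRTIO_QNUM + (VIRTIO_RXQ or VIRTIO_TXQ). */
	vq = find_internal_queue(dev, queue_id);
	if (vq == NULL)
		return 0;

	/* The RX/TX burst functions check this flag and return immediately
	 * for disabled queues, so no CPU cycles are spent on them. */
	vq->enabled = enable;

	RTE_LOG(INFO, PMD, "vring %u is %s\n",
		queue_id, enable ? "enabled" : "disabled");

	return 0;
}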