Date: Thu, 25 Apr 2024 18:09:48 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Angus Chen <angus.chen@jaguarmicro.com>
Cc: Gavin Liu <gavin.liu@jaguarmicro.com>, jasowang@redhat.com,
	virtualization@lists.linux.dev, xuanzhuo@linux.alibaba.com,
	linux-kernel@vger.kernel.org, Heng Qi <hengqi@linux.alibaba.com>
Subject: Re: Reply: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
Message-ID: <20240423053232-mutt-send-email-mst@kernel.org>
References: <20240410033020.1310-1-yuxue.liu@jaguarmicro.com>
 <20240422080729-mutt-send-email-mst@kernel.org>
 <20240423043424-mutt-send-email-mst@kernel.org>
X-Mailing-List: virtualization@lists.linux.dev

On Tue, Apr 23, 2024 at 08:42:57AM +0000, Angus Chen wrote:
> Hi mst.
>
> > -----Original Message-----
> > From: Michael S.
Tsirkin <mst@redhat.com>
> > Sent: Tuesday, April 23, 2024 4:35 PM
> > To: Gavin Liu <gavin.liu@jaguarmicro.com>
> > Cc: jasowang@redhat.com; Angus Chen <angus.chen@jaguarmicro.com>;
> > virtualization@lists.linux.dev; xuanzhuo@linux.alibaba.com;
> > linux-kernel@vger.kernel.org; Heng Qi <hengqi@linux.alibaba.com>
> > Subject: Re: Reply: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
> >
> > On Tue, Apr 23, 2024 at 01:39:17AM +0000, Gavin Liu wrote:
> > > On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
> > > > From: Yuxue Liu <yuxue.liu@jaguarmicro.com>
> > > >
> > > > When there is a ctlq and it doesn't require interrupt callbacks, the
> > > > original method of calculating vectors wastes hardware MSI or MSI-X
> > > > resources as well as system IRQ resources.
> > > >
> > > > When conducting performance testing using testpmd in the guest OS, it
> > > > was found that the performance was lower compared to directly using
> > > > vfio-pci to pass the device through.
> > > >
> > > > In scenarios where the virtio device in the guest OS does not use
> > > > interrupts, the vDPA driver still configures the hardware's MSI-X
> > > > vectors. Therefore, the hardware still sends interrupts to the host OS.
> > >
> > > > I just have a question on this part. If the hardware sends interrupts,
> > > > why doesn't the guest driver disable them?
> > >
> > > 1: Assuming the guest OS's virtio device is using PMD mode, QEMU sets
> > >    the call fd to -1.
> > > 2: On the host side, the vhost_vdpa program will set
> > >    vp_vdpa->vring[i].cb.callback to invalid.
> > > 3: Before the modification, the vp_vdpa_request_irq function does not
> > >    check whether vp_vdpa->vring[i].cb.callback is valid. Instead, it
> > >    enables the hardware's MSI-X interrupts based on the number of
> > >    queues of the device.
> >
> > So MSI-X is enabled, but why would it trigger? virtio PMD in poll mode
> > presumably suppresses interrupts after all.
>
> The virtio PMD is in the guest, but on the host side MSI-X is enabled, so
> the device will trigger interrupts normally. I analysed this bug before,
> and I think Gavin is right.
> Did I make it clear?

Not really. The guest presumably disables interrupts (it's polling), so why
does the device still send them?

> > > ----- Original Message -----
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: April 22, 2024 20:09
> > > To: Gavin Liu <gavin.liu@jaguarmicro.com>
> > > Cc: jasowang@redhat.com; Angus Chen <angus.chen@jaguarmicro.com>;
> > > virtualization@lists.linux.dev; xuanzhuo@linux.alibaba.com;
> > > linux-kernel@vger.kernel.org; Heng Qi <hengqi@linux.alibaba.com>
> > > Subject: Re: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
> > >
> > > External Mail: This email originated from OUTSIDE of the organization!
> > > Do not click links, open attachments or provide ANY information unless
> > > you recognize the sender and know the content is safe.
> > >
> > > On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
> > > > From: Yuxue Liu <yuxue.liu@jaguarmicro.com>
> > > >
> > > > When there is a ctlq and it doesn't require interrupt callbacks, the
> > > > original method of calculating vectors wastes hardware MSI or MSI-X
> > > > resources as well as system IRQ resources.
> > > >
> > > > When conducting performance testing using testpmd in the guest OS, it
> > > > was found that the performance was lower compared to directly using
> > > > vfio-pci to pass the device through.
> > > >
> > > > In scenarios where the virtio device in the guest OS does not use
> > > > interrupts, the vDPA driver still configures the hardware's MSI-X
> > > > vectors. Therefore, the hardware still sends interrupts to the host OS.
> > >
> > > I just have a question on this part. If the hardware sends interrupts,
> > > why doesn't the guest driver disable them?
> > >
> > > > Because of this unnecessary
> > > > action by the hardware, hardware performance decreases, and it also
> > > > affects the performance of the host OS.
> > > >
> > > > Before modification (interrupt mode):
> > > > 32:  0  0  0  0 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-0
> > > > 33:  0  0  0  0 PCI-MSI 32769-edge    vp-vdpa[0000:00:02.0]-1
> > > > 34:  0  0  0  0 PCI-MSI 32770-edge    vp-vdpa[0000:00:02.0]-2
> > > > 35:  0  0  0  0 PCI-MSI 32771-edge    vp-vdpa[0000:00:02.0]-config
> > > >
> > > > After modification (interrupt mode):
> > > > 32:  0  0  1  7 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-0
> > > > 33: 36  0  3  0 PCI-MSI 32769-edge    vp-vdpa[0000:00:02.0]-1
> > > > 34:  0  0  0  0 PCI-MSI 32770-edge    vp-vdpa[0000:00:02.0]-config
> > > >
> > > > Before modification (virtio PMD mode for guest OS):
> > > > 32:  0  0  0  0 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-0
> > > > 33:  0  0  0  0 PCI-MSI 32769-edge    vp-vdpa[0000:00:02.0]-1
> > > > 34:  0  0  0  0 PCI-MSI 32770-edge    vp-vdpa[0000:00:02.0]-2
> > > > 35:  0  0  0  0 PCI-MSI 32771-edge    vp-vdpa[0000:00:02.0]-config
> > > >
> > > > After modification (virtio PMD mode for guest OS):
> > > > 32:  0  0  0  0 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-config
> > > >
> > > > To verify the use of virtio PMD mode in the guest operating system,
> > > > the following patch needs to be applied to QEMU:
> > > > https://lore.kernel.org/all/20240408073311.2049-1-yuxue.liu@jaguarmicro.com
> > > >
> > > > Signed-off-by: Yuxue Liu <yuxue.liu@jaguarmicro.com>
> > > > Acked-by: Jason Wang <jasowang@redhat.com>
> > > > Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>
> > > > ---
> > > > V5: modify the description of the printout when an exception occurs
> > > > V4: update the title and assign values to uninitialized variables
> > > > V3: delete unused variables and add validation records
> > > > V2: fix when allocating IRQs, scan all queues
> > > >
> > > >  drivers/vdpa/virtio_pci/vp_vdpa.c | 22 ++++++++++++++++------
> > > >  1 file changed, 16 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > > index df5f4a3bccb5..8de0224e9ec2 100644
> > > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > > @@ -160,7 +160,13 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> > > >  	struct pci_dev *pdev = mdev->pci_dev;
> > > >  	int i, ret, irq;
> > > >  	int queues = vp_vdpa->queues;
> > > > -	int vectors = queues + 1;
> > > > +	int vectors = 1;
> > > > +	int msix_vec = 0;
> > > > +
> > > > +	for (i = 0; i < queues; i++) {
> > > > +		if (vp_vdpa->vring[i].cb.callback)
> > > > +			vectors++;
> > > > +	}
> > > >
> > > >  	ret = pci_alloc_irq_vectors(pdev, vectors, vectors, PCI_IRQ_MSIX);
> > > >  	if (ret != vectors) {
> > > > @@ -173,9 +179,12 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> > > >  	vp_vdpa->vectors = vectors;
> > > >
> > > >  	for (i = 0; i < queues; i++) {
> > > > +		if (!vp_vdpa->vring[i].cb.callback)
> > > > +			continue;
> > > > +
> > > >  		snprintf(vp_vdpa->vring[i].msix_name, VP_VDPA_NAME_SIZE,
> > > >  			 "vp-vdpa[%s]-%d\n", pci_name(pdev), i);
> > > > -		irq = pci_irq_vector(pdev, i);
> > > > +		irq = pci_irq_vector(pdev, msix_vec);
> > > >  		ret = devm_request_irq(&pdev->dev, irq,
> > > >  				       vp_vdpa_vq_handler,
> > > >  				       0, vp_vdpa->vring[i].msix_name,
> > > > @@ -185,21 +194,22 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> > > >  				"vp_vdpa: fail to request irq for vq %d\n", i);
> > > >  			goto err;
> > > >  		}
> > > > -		vp_modern_queue_vector(mdev, i, i);
> > > > +		vp_modern_queue_vector(mdev, i, msix_vec);
> > > >  		vp_vdpa->vring[i].irq = irq;
> > > > +		msix_vec++;
> > > >  	}
> > > >
> > > >  	snprintf(vp_vdpa->msix_name, VP_VDPA_NAME_SIZE, "vp-vdpa[%s]-config\n",
> > > >  		 pci_name(pdev));
> > > > -	irq = pci_irq_vector(pdev, queues);
> > > > +	irq = pci_irq_vector(pdev, msix_vec);
> > > >  	ret = devm_request_irq(&pdev->dev, irq, vp_vdpa_config_handler, 0,
> > > >  			       vp_vdpa->msix_name, vp_vdpa);
> > > >  	if (ret) {
> > > >  		dev_err(&pdev->dev,
> > > > -			"vp_vdpa: fail to request irq for vq %d\n", i);
> > > > +			"vp_vdpa: fail to request irq for config: %d\n",
> > > > +			ret);
> > > >  		goto err;
> > > >  	}
> > > > -	vp_modern_config_vector(mdev, queues);
> > > > +	vp_modern_config_vector(mdev, msix_vec);
> > > >  	vp_vdpa->config_irq = irq;
> > > >
> > > >  	return 0;
> > > > --
> > > > 2.43.0