From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 17 Jul 2021 17:30:43 +0800
From: Ming Lei
To: Bjorn Helgaas
Cc: Jens Axboe, Christoph Hellwig, linux-block@vger.kernel.org,
 linux-nvme@lists.infradead.org, Greg Kroah-Hartman, Bjorn Helgaas,
 linux-pci@vger.kernel.org, Thomas Gleixner, Sagi Grimberg,
 Daniel Wagner, Wen Xiong, John Garry, Hannes Reinecke, Keith Busch
Subject: Re: [PATCH V4 1/3] driver core: mark device as irq affinity managed if any irq is managed
References: <20210715120844.636968-2-ming.lei@redhat.com>
 <20210716200154.GA2113453@bjorn-Precision-5520>
In-Reply-To: <20210716200154.GA2113453@bjorn-Precision-5520>
Content-Type: text/plain; charset=us-ascii
List-ID: linux-block@vger.kernel.org

On Fri, Jul 16, 2021 at 03:01:54PM -0500, Bjorn Helgaas wrote:
> On Thu, Jul 15, 2021 at 08:08:42PM +0800, Ming Lei wrote:
> > irq vector allocation with managed affinity may be used by driver, and
> > blk-mq needs this info because managed irq will be shutdown when all
> > CPUs in the affinity mask are offline.
> >
> > The info of using managed irq is often produced by drivers(pci subsystem,
>
> Add space between "drivers" and "(".
> s/pci/PCI/

OK.

> Does this "managed IRQ" (or "managed affinity", not sure what the
> correct terminology is here) have something to do with devm?
>
> > platform device, ...), and it is consumed by blk-mq, so different subsystems
> > are involved in this info flow
>
> Add period at end of sentence.

OK.

> > Address this issue by adding one field of .irq_affinity_managed into
> > 'struct device'.
> >
> > Suggested-by: Christoph Hellwig
> > Signed-off-by: Ming Lei
> > ---
> >  drivers/base/platform.c | 7 +++++++
> >  drivers/pci/msi.c       | 3 +++
> >  include/linux/device.h  | 1 +
> >  3 files changed, 11 insertions(+)
> >
> > diff --git a/drivers/base/platform.c b/drivers/base/platform.c
> > index 8640578f45e9..d28cb91d5cf9 100644
> > --- a/drivers/base/platform.c
> > +++ b/drivers/base/platform.c
> > @@ -388,6 +388,13 @@ int devm_platform_get_irqs_affinity(struct platform_device *dev,
> >  					       ptr->irq[i], ret);
> >  			goto err_free_desc;
> >  		}
> > +
> > +		/*
> > +		 * mark the device as irq affinity managed if any irq affinity
> > +		 * descriptor is managed
> > +		 */
> > +		if (desc[i].is_managed)
> > +			dev->dev.irq_affinity_managed = true;
> >  	}
> >
> >  	devres_add(&dev->dev, ptr);
> > diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> > index 3d6db20d1b2b..7ddec90b711d 100644
> > --- a/drivers/pci/msi.c
> > +++ b/drivers/pci/msi.c
> > @@ -1197,6 +1197,7 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> >  	if (flags & PCI_IRQ_AFFINITY) {
> >  		if (!affd)
> >  			affd = &msi_default_affd;
> > +		dev->dev.irq_affinity_managed = true;
>
> This is really opaque to me.  I can't tell what the connection between
> PCI_IRQ_AFFINITY and irq_affinity_managed is.

The comment for PCI_IRQ_AFFINITY is 'Auto-assign affinity', and
'irq_affinity_managed' basically means that the irq's affinity is managed
by the kernel.

What blk-mq needs to know is exactly whether PCI_IRQ_AFFINITY was applied
when allocating the irq vectors. When PCI_IRQ_AFFINITY is used, genirq
will shut down the irq once all CPUs in the assigned affinity mask are
offline, so blk-mq has to drain all in-flight IOs that will be completed
via this irq and prevent new IO from being queued. That is the connection.

Or do you think 'irq_affinity_managed' isn't named well?

> AFAICT the only place irq_affinity_managed is ultimately used is
> blk_mq_hctx_notify_offline(), and there's no obvious connection
> between that and this code.
I believe the connection is described in the comment above.

Thanks,
Ming