Date: Mon, 17 Jan 2022 09:14:13 +0000
Message-ID: <87k0ey9122.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: John Garry <john.garry@huawei.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	chenxiang <chenxiang66@hisilicon.com>,
	Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"liuqi (BA)" <liuqi115@huawei.com>
Subject: Re: PCI MSI issue for maxcpus=1
In-Reply-To: <87v8yjyjc0.wl-maz@kernel.org>
References: <78615d08-1764-c895-f3b7-bfddfbcbdfb9@huawei.com>
	<87a6g8vp8k.wl-maz@kernel.org>
	<19d55cdf-9ef7-e4a3-5ae5-0970f0d7751b@huawei.com>
	<87v8yjyjc0.wl-maz@kernel.org>

On Sun, 16 Jan 2022 12:07:59 +0000,
Marc Zyngier <maz@kernel.org> wrote:
> 
> On Fri, 07 Jan 2022 11:24:38 +0000,
> John Garry <john.garry@huawei.com> wrote:
> > 
> > Hi Marc,
> > 
> > >> So it's the driver call to pci_alloc_irq_vectors_affinity() which
> > >> errors [1]:
> > >>
> > >> [    9.619070] hisi_sas_v3_hw: probe of 0000:74:02.0 failed with error -2
> > > Can you log what error is returned from pci_alloc_irq_vectors_affinity()?
> > 
> > -EINVAL
> > 
> > >
> > >> Some details:
> > >> - device supports 32 MSI
> > >> - min and max msi for that function is 17 and 32, respect.
> > > This 17 is a bit odd, owing to the fact that MultiMSI can only deal
> > > with powers of 2. You will always allocate 32 in this case. Not sure
> > > why that'd cause an issue though. Unless...
> > Even though 17 is the min, we still try for nvec=32 in
> > msi_capability_init() as possible CPUs is 96.
> > 
> > > 
> > >> - affd pre and post are 16 and 0, respect.
> > >>
> > >> I haven't checked to see what the issue is yet and I think that the
> > >> pci_alloc_irq_vectors_affinity() usage is ok...
> > > ... we really end-up with desc->nvec_used == 32 and try to activate
> > > past vector 17 (which is likely to fail). Could you please check this?
> > Yeah, that looks to fail. Reason being that in the GIC ITS driver when
> > we try to activate the irq for this managed interrupt all cpus in the
> > affinity mask are offline. Calling its_irq_domain_activate() ->
> > its_select_cpu() it gives cpu=nr_cpu_ids. The affinity mask for that
> > interrupt is 24-29.
> I guess that for managed interrupts, it shouldn't matter, as these
> interrupts should only be used when the relevant CPUs come online.
> 
> Would something like below help? Totally untested, as I don't have a
> Multi-MSI capable device that I can plug in a GICv3 system (maybe I
> should teach that to a virtio device...).
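
To make the failure mode concrete before getting to the fix, here is a
minimal userspace model of the mask arithmetic involved (plain 64-bit
masks stand in for struct cpumask, and pick_cpu() for
cpumask_pick_least_loaded(); an illustration only, not the actual ITS
code). With maxcpus=1 only CPU0 is online while the managed affinity
mask covers CPUs 24-29, so ANDing with cpu_online_mask yields an empty
mask, the selection degenerates to nr_cpu_ids, and the activate
callback fails, which is presumably where the driver's -EINVAL
ultimately comes from.

/* mask_model.c: standalone illustration, not kernel code */
#include <stdint.h>
#include <stdio.h>

#define NR_CPU_IDS	96	/* possible CPUs in the report above */

/* stand-in for cpumask_pick_least_loaded(): any set bit will do here */
static int pick_cpu(uint64_t mask)
{
	for (int cpu = 0; cpu < 64; cpu++)
		if (mask & (1ULL << cpu))
			return cpu;
	return NR_CPU_IDS;	/* empty mask: nothing to pick */
}

int main(void)
{
	uint64_t affinity = 0x3fULL << 24;	/* managed affinity: CPUs 24-29 */
	uint64_t online   = 1ULL << 0;		/* maxcpus=1: only CPU0 online */

	/* current behaviour: AND with the online mask, which is empty here */
	printf("cpumask_and:  cpu = %d\n", pick_cpu(affinity & online));

	/* behaviour with the change below: use the affinity mask as-is */
	printf("cpumask_copy: cpu = %d\n", pick_cpu(affinity));

	return 0;
}

Running it prints nr_cpu_ids (96) for the first case and CPU 24 for the
second.
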
Actually, if the CPU online status doesn't matter for managed affinity
interrupts, then the correct fix is this:

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index d25b7a864bbb..af4e72a6be63 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1624,7 +1624,7 @@ static int its_select_cpu(struct irq_data *d,
 
 		cpu = cpumask_pick_least_loaded(d, tmpmask);
 	} else {
-		cpumask_and(tmpmask, irq_data_get_affinity_mask(d), cpu_online_mask);
+		cpumask_copy(tmpmask, irq_data_get_affinity_mask(d));
 
 		/* If we cannot cross sockets, limit the search to that node */
 		if ((its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144) &&

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
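
P.S. For reference, the allocation being debugged has roughly the
following shape. This is an illustrative sketch built from the numbers
reported above (16 pre vectors, min 17, max 32), not the actual
hisi_sas_v3_hw code, and the function name is made up.

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Illustrative only: name and values taken from the report above. */
static int example_alloc_vectors(struct pci_dev *pdev)
{
	struct irq_affinity affd = {
		.pre_vectors	= 16,	/* affd pre = 16, post = 0 */
	};

	/*
	 * min 17, max 32: Multi-MSI grants powers of two, so 32 vectors
	 * get allocated. The 16 vectors past the pre-vectors are managed
	 * and spread over the 96 possible CPUs, and with maxcpus=1 their
	 * target CPUs are all offline when the vectors are activated.
	 */
	return pci_alloc_irq_vectors_affinity(pdev, 17, 32,
					      PCI_IRQ_MSI | PCI_IRQ_AFFINITY,
					      &affd);
}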