From: Ming Lei
To: Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, Ming Lei, Christoph Hellwig,
	Keith Busch, linux-nvme@lists.infradead.org, Jon Derrick,
	Jens Axboe
Subject: [PATCH V2 1/3] genirq/affinity: Improve __irq_build_affinity_masks()
Date: Mon, 12 Aug 2019 17:57:07 +0800
Message-Id: <20190812095709.25623-2-ming.lei@redhat.com>
In-Reply-To: <20190812095709.25623-1-ming.lei@redhat.com>
References: <20190812095709.25623-1-ming.lei@redhat.com>

One invariant of __irq_build_affinity_masks() is that all CPUs in the
specified masks (cpu_mask AND node_to_cpumask for each node) should be
covered during the spread. Even after all requested vectors have been
assigned, the remaining CPUs still need to be covered by the spread.
A similar policy is already applied in the 'numvecs <= nodes' case, so
remove the following check inside the loop:

	if (done >= numvecs)
		break;

Meanwhile, assign at least one vector to each remaining node once
'numvecs' vectors have been handled. Also, if the specified cpumask for
a NUMA node is empty, simply do not spread vectors on that node.
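To make the new policy concrete, here is a minimal user-space sketch
(not part of the patch) of how the per-node vector count is chosen. The
helper name calc_vecs_per_node and the standalone main() are made up
for illustration only; 'numvecs', 'done' and 'nodes' mirror the
variables of the same names in __irq_build_affinity_masks():

	#include <stdio.h>

	/* Hypothetical helper mirroring the vecs_per_node computation */
	static unsigned int calc_vecs_per_node(unsigned int numvecs,
					       unsigned int done,
					       unsigned int nodes)
	{
		if (numvecs > done) {
			/* spread the not-yet-assigned vectors evenly ... */
			unsigned int v = (numvecs - done) / nodes;

			return v ? v : 1;	/* ... but at least one per node */
		}
		/* 'numvecs' already reached: still give each node one vector */
		return 1;
	}

	int main(void)
	{
		/* 2 of 8 vectors assigned, 3 nodes left: (8 - 2) / 3 = 2 */
		printf("%u\n", calc_vecs_per_node(8, 2, 3));
		/* all 8 vectors assigned, 2 nodes left: each still gets 1 */
		printf("%u\n", calc_vecs_per_node(8, 8, 2));
		return 0;
	}

Because every remaining node keeps receiving at least one vector, the
spread can overshoot 'numvecs', which is why the return value is
clamped to 'numvecs' at the end of the function.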
Cc: Christoph Hellwig
Cc: Keith Busch
Cc: linux-nvme@lists.infradead.org
Cc: Jon Derrick
Cc: Jens Axboe
Signed-off-by: Ming Lei
---
 kernel/irq/affinity.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 6fef48033f96..c7cca942bd8a 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -129,14 +129,26 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 	for_each_node_mask(n, nodemsk) {
 		unsigned int ncpus, v, vecs_to_assign, vecs_per_node;
 
-		/* Spread the vectors per node */
-		vecs_per_node = (numvecs - (curvec - firstvec)) / nodes;
-
 		/* Get the cpus on this node which are in the mask */
 		cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
-
-		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
+		if (!ncpus)
+			continue;
+
+		/*
+		 * Calculate the number of cpus per vector
+		 *
+		 * Spread the vectors evenly per node. If the requested
+		 * vector number has been reached, simply allocate one
+		 * vector for each remaining node so that all nodes can
+		 * be covered
+		 */
+		if (numvecs > done)
+			vecs_per_node = max_t(unsigned,
+					(numvecs - done) / nodes, 1);
+		else
+			vecs_per_node = 1;
+
 		vecs_to_assign = min(vecs_per_node, ncpus);
 
 		/* Account for rounding errors */
@@ -156,13 +168,11 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 		}
 
 		done += v;
-		if (done >= numvecs)
-			break;
 		if (curvec >= last_affv)
 			curvec = firstvec;
 		--nodes;
 	}
-	return done;
+	return done < numvecs ? done : numvecs;
 }
 
 /*
-- 
2.20.1