Date: Sun, 21 May 2017 20:31:47 +0200 (CEST)
From: Thomas Gleixner
To: Christoph Hellwig
cc: Jens Axboe, Keith Busch, linux-nvme@lists.infradead.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/7] genirq/affinity: assign vectors to all present CPUs
In-Reply-To: <20170519085756.29742-3-hch@lst.de>
References: <20170519085756.29742-1-hch@lst.de> <20170519085756.29742-3-hch@lst.de>

On Fri, 19 May 2017, Christoph Hellwig wrote:
> -	/* Stabilize the cpumasks */
> -	get_online_cpus();

How is that protected against physical CPU hotplug? Physical CPU hotplug
manipulates the present mask.

> -	nodes = get_nodes_in_cpumask(cpu_online_mask, &nodemsk);
> +	nodes = get_nodes_in_cpumask(cpu_present_mask, &nodemsk);

> +static int __init irq_build_cpumap(void)
> +{
> +	int node, cpu;
> +
> +	for (node = 0; node < nr_node_ids; node++) {
> +		if (!zalloc_cpumask_var(&node_to_present_cpumask[node],
> +					GFP_KERNEL))
> +			panic("can't allocate early memory\n");
> +	}
>
> -	return min(cpus, vecs) + resv;
> +	for_each_present_cpu(cpu) {
> +		node = cpu_to_node(cpu);
> +		cpumask_set_cpu(cpu, node_to_present_cpumask[node]);
> +	}

This mask needs updating on physical hotplug as well.

Thanks,

	tglx
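
[Editorial note: a minimal sketch of one way to address the point raised
above; the helper name is hypothetical and not part of the posted patch.
Instead of caching the node -> present-CPU map once at early init, it is
rebuilt at the point of use, so later changes to cpu_present_mask from
physical hotplug are picked up. The caller is assumed to serialize against
CPU hotplug so the present mask cannot change while the map is rebuilt and
consumed.]

#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/topology.h>

/*
 * Hypothetical sketch: recompute the node -> present-CPU map each time
 * the affinity masks are built, rather than once at early init.  The
 * caller is assumed to hold a lock that keeps cpu_present_mask stable
 * for the duration of the rebuild and the subsequent spreading.
 */
static void irq_rebuild_node_to_present_cpumask(cpumask_var_t *masks)
{
	int node, cpu;

	/* Start from a clean slate so removed CPUs drop out of the map. */
	for (node = 0; node < nr_node_ids; node++)
		cpumask_clear(masks[node]);

	/* Record every currently present CPU under its home node. */
	for_each_present_cpu(cpu) {
		node = cpu_to_node(cpu);
		cpumask_set_cpu(cpu, masks[node]);
	}
}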