From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Leon Romanovsky, Shay Drory, Saeed Mahameed
Subject: [net-next 08/15] net/mlx5: Removing rmap per IRQ
Date: Mon, 14 Jun 2021 21:01:16 -0700
Message-Id: <20210615040123.287101-9-saeed@kernel.org>
In-Reply-To: <20210615040123.287101-1-saeed@kernel.org>
References: <20210615040123.287101-1-saeed@kernel.org>

From: Shay Drory

In subsequent patches, IRQs will be requested on demand instead of
statically at driver boot.

Also, rmap is currently managed by the IRQ layer; rmap management will
move out of the IRQ layer in future patches.

Therefore, remove each IRQ from the rmap when that IRQ is destroyed,
instead of removing all IRQs from the rmap when the irq_table is
destroyed.

Signed-off-by: Shay Drory
Reviewed-by: Leon Romanovsky
Signed-off-by: Saeed Mahameed
---
 .../net/ethernet/mellanox/mlx5/core/pci_irq.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index 81b06b5693cd..6a5a6ec0ddbf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -154,8 +154,14 @@ static void irq_release(struct kref *kref)
 {
 	struct mlx5_irq *irq = container_of(kref, struct mlx5_irq, kref);
 
+	/* free_irq requires that affinity and rmap will be cleared
+	 * before calling it. This is why there is asymmetry with set_rmap
+	 * which should be called after alloc_irq but before request_irq.
+	 */
 	irq_set_affinity_hint(irq->irqn, NULL);
 	free_cpumask_var(irq->mask);
+	/* this line is releasing this irq from the rmap */
+	irq_set_affinity_notifier(irq->irqn, NULL);
 	free_irq(irq->irqn, &irq->nh);
 }
 
@@ -378,6 +384,11 @@ int mlx5_irq_table_create(struct mlx5_core_dev *dev)
 	return err;
 }
 
+static void irq_table_clear_rmap(struct mlx5_irq_table *table)
+{
+	cpu_rmap_put(table->rmap);
+}
+
 void mlx5_irq_table_destroy(struct mlx5_core_dev *dev)
 {
 	struct mlx5_irq_table *table = dev->priv.irq_table;
@@ -386,11 +397,7 @@ void mlx5_irq_table_destroy(struct mlx5_core_dev *dev)
 	if (mlx5_core_is_sf(dev))
 		return;
 
-	/* free_irq requires that affinity and rmap will be cleared
-	 * before calling it. This is why there is asymmetry with set_rmap
-	 * which should be called after alloc_irq but before request_irq.
-	 */
-	irq_clear_rmap(dev);
+	irq_table_clear_rmap(table);
 	for (i = 0; i < table->nvec; i++)
 		irq_release(&mlx5_irq_get(dev, i)->kref);
 	pci_free_irq_vectors(dev->pdev);
-- 
2.31.1
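
For context, a minimal sketch (not part of the patch) of the ordering the
moved comment describes: an IRQ is hooked into the reverse map after the
vector is allocated but before request_irq(), and is released from the rmap
via irq_set_affinity_notifier(..., NULL) before free_irq(). The example_*
helper names and the "example" device context below are hypothetical; the
cpu_rmap and IRQ calls are the standard kernel APIs used by the patch.

#include <linux/cpu_rmap.h>
#include <linux/interrupt.h>

/* Hypothetical helper: hook one IRQ into an existing reverse map and
 * request it, mirroring the set_rmap-before-request_irq ordering.
 */
static int example_request_irq(struct cpu_rmap *rmap, unsigned int irqn,
			       irq_handler_t handler, void *ctx)
{
	int err;

	err = irq_cpu_rmap_add(rmap, irqn);	/* rmap hook before request_irq */
	if (err)
		return err;

	err = request_irq(irqn, handler, 0, "example", ctx);
	if (err)
		irq_set_affinity_notifier(irqn, NULL);	/* undo the rmap hook */
	return err;
}

/* Hypothetical helper: per-IRQ teardown as irq_release() does after this
 * patch -- clear the affinity hint and the rmap notifier before free_irq().
 */
static void example_free_irq(unsigned int irqn, void *ctx)
{
	irq_set_affinity_hint(irqn, NULL);
	irq_set_affinity_notifier(irqn, NULL);	/* releases this IRQ from the rmap */
	free_irq(irqn, ctx);
}

With the rmap notifier dropped per IRQ, the table-level teardown only has to
drop its own reference via cpu_rmap_put(), which is exactly what the new
irq_table_clear_rmap() helper in this patch does.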