Date: Tue, 11 Sep 2018 11:22:54 +0200
From: Christoph Hellwig
To: Thomas Gleixner
Cc: Kashyap Desai, Ming Lei, Sumit Saxena, Christoph Hellwig,
 Linux Kernel Mailing List, Shivasharan Srikanteshwara, linux-block
Subject: Re: Affinity managed interrupts vs non-managed interrupts
Message-ID: <20180911092254.GB10330@lst.de>
References: <20180829084618.GA24765@ming.t460p>
 <300d6fef733ca76ced581f8c6304bac6@mail.gmail.com>
 <615d78004495aebc53807156d04d988c@mail.gmail.com>
 <486f94a563d63c4779498fe8829a546c@mail.gmail.com>
List-Id: linux-block@vger.kernel.org

On Sat, Sep 01, 2018 at 12:48:46AM +0200, Thomas Gleixner wrote:
> > We want some changes in the current API that allow us to pass flags
> > (like *local numa affinity*) so that the cpu-msix mappings come from the
> > local numa node and the effective cpus are spread across the local numa
> > node.
>
> What you really want is to split the vector space for your device into two
> blocks. One for the regular per cpu queues and the other (16 or however
> many) which are managed separately, i.e. spread out evenly. That needs some
> extensions to the core allocation/management code, but that shouldn't be a
> huge problem.

Note that there are some other use cases for multiple sets of affinity
managed irqs. Various network devices insist on having separate TX vs RX
interrupts, for example.
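[Editor's note: the split Thomas describes later took shape in the kernel as
interrupt "sets" in struct irq_affinity. A rough sketch of how a driver might
request such a split is below; it assumes the set-based fields (nr_sets,
set_size) as they appear in recent linux/interrupt.h, the function name
example_setup_irqs is hypothetical, and the fragment is illustrative only, not
buildable outside a kernel tree.]

```c
/*
 * Illustrative sketch: request two sets of managed MSI-X vectors in one
 * allocation -- nr_queue_vecs regular per-CPU queue interrupts plus 16
 * separately managed interrupts, each set spread over the CPUs
 * independently by the core affinity code.
 */
#include <linux/pci.h>
#include <linux/interrupt.h>

static int example_setup_irqs(struct pci_dev *pdev, unsigned int nr_queue_vecs)
{
	struct irq_affinity affd = {
		.nr_sets  = 2,
		/* set 0: per-CPU queue vectors; set 1: 16 extra vectors */
		.set_size = { nr_queue_vecs, 16 },
	};
	unsigned int nvecs = nr_queue_vecs + 16;

	return pci_alloc_irq_vectors_affinity(pdev, nvecs, nvecs,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &affd);
}
```

This matches the use cases in the thread: a storage HBA's regular queues plus
a block of separately spread reply-queue vectors, or a NIC's distinct TX and
RX interrupt sets.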