From: "chenxiang (M)" <chenxiang66@hisilicon.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: <lkml@sdf.org>, <tglx@linutronix.de>, <kbusch@kernel.org>,
	"axboe@kernel.dk" <axboe@kernel.dk>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Linuxarm <linuxarm@huawei.com>,
	John Garry <john.garry@huawei.com>
Subject: Re: The irq Affinity is changed after the patch(Fixes: b1a5a73e64e9 ("genirq/affinity: Spread vectors on node according to nr_cpu ratio"))
Date: Tue, 19 Nov 2019 11:32:48 +0800	[thread overview]
Message-ID: <75a630b2-029b-0a3e-79a9-d11143a033ad@hisilicon.com> (raw)
In-Reply-To: <20191119031700.GE391@ming.t460p>



On 2019/11/19 11:17, Ming Lei wrote:
> On Tue, Nov 19, 2019 at 11:05:55AM +0800, chenxiang (M) wrote:
>> Hi Ming,
>>
>> On 2019/11/19 9:42, Ming Lei wrote:
>>> On Tue, Nov 19, 2019 at 09:25:30AM +0800, chenxiang (M) wrote:
>>>> Hi,
>>>>
>>>> There are 128 CPUs and 16 irqs for the SAS controller in my system, and
>>>> there are 4 nodes with 32 CPUs per node (cpu0-31 on node0, cpu32-63 on
>>>> node1, cpu64-95 on node2, cpu96-127 on node3).
>>>> We use pci_alloc_irq_vectors_affinity() to set the affinity of the irqs.
>>>>
>>>> I find that before the patch (Fixes: b1a5a73e64e9 ("genirq/affinity: Spread
>>>> vectors on node according to nr_cpu ratio")) the relationship between irqs
>>>> and CPUs was: irq0 bound to cpu0-7, irq1 bound to cpu8-15, irq2 bound to
>>>> cpu16-23, irq3 bound to cpu24-31, irq4 bound to cpu32-39, ..., irq15 bound
>>>> to cpu120-127. After the patch the relationship changed: irq0 bound to
>>>> cpu32-39, irq1 bound to cpu40-47, ..., irq11 bound to cpu120-127, irq12
>>>> bound to cpu0-7, irq13 bound to cpu8-15, irq14 bound to cpu16-23, irq15
>>>> bound to cpu24-31.
>>>>
>>>> I notice that before the sort() call in alloc_nodes_vectors() the ids of
>>>> the node_vectors[] array are 0, 1, 2, 3, but after sort() they are
>>>> 1, 2, 3, 0.
>>>> I think the sort is by the number of CPUs in each node, so the order
>>>> should be unchanged, since every node has 32 CPUs.
>>> Maybe there are more non-present CPUs covered by node 0.
>>>
>>> Could you provide the following log?
>>>
>>> 1) lscpu
>>>
>>> 2) ./dump-io-irq-affinity $PCI_ID_SAS
>>>
>>> 	http://people.redhat.com/minlei/tests/tools/dump-io-irq-affinity
>>>
>>> You need to figure out the PCI ID (the 1st column of lspci output) of the
>>> SAS controller via lspci.
>> Sorry, I can't access the link you provided, but I can provide those irqs'
>> affinity in the attachment.
>> I also wrote a small testcase, and found the ids are 1, 2, 3, 0 after
>> calling sort().
> A runtime log from /proc/interrupts isn't useful for investigating an
> affinity allocation issue; please use the attached script to collect
> the log.

Note: there are 32 irqs for the SAS controller; irq0-15 are other interrupts
(such as phy up/down/channel, ...), and only irq 16-31 are the cq interrupts,
which are the ones spread by pci_alloc_irq_vectors_affinity().
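
As an aside, below is a minimal sketch of how a driver can request such a
layout with pci_alloc_irq_vectors_affinity(). It is only illustrative: the
function name sas_request_vectors(), the 16/16 split and the MSI flags are
assumptions based on the description above, not taken from the actual
hisi_sas driver. The pre_vectors field tells the core that the first 16
vectors are not affinity-managed, so only the remaining 16 cq vectors get
spread across the nodes.

#include <linux/interrupt.h>
#include <linux/pci.h>

#define SAS_OTHER_VECTORS	16	/* phy up/down, channel, ... (not managed) */
#define SAS_CQ_VECTORS		16	/* completion-queue vectors (managed) */

static int sas_request_vectors(struct pci_dev *pdev)
{
	/*
	 * Mark the first 16 vectors as pre_vectors: the core will not
	 * spread them, only the 16 cq vectors that follow.
	 */
	struct irq_affinity desc = {
		.pre_vectors = SAS_OTHER_VECTORS,
	};
	int nvec;

	nvec = pci_alloc_irq_vectors_affinity(pdev,
				SAS_OTHER_VECTORS + SAS_CQ_VECTORS,
				SAS_OTHER_VECTORS + SAS_CQ_VECTORS,
				PCI_IRQ_MSI | PCI_IRQ_AFFINITY,
				&desc);
	return nvec < 0 ? nvec : 0;
}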
The log is as follows:

Euler:~ # ./dump-io-irq-affinity 74:02.0
kernel version:
Linux Euler 5.4.0-rc2-14683-g74684b1-dirty #224 SMP PREEMPT Mon Nov 18 
18:54:27 CST 2019 aarch64 aarch64 aarch64 GNU/Linux
PCI name is 74:02.0: sdd
cat: /proc/irq/65/smp_affinity_list: No such file or directory
cat: /proc/irq/65/effective_affinity_list: No such file or directory
     irq 65, cpu list , effective list
     irq 66, cpu list 0-31, effective list 0
     irq 67, cpu list 0-31, effective list 0
cat: /proc/irq/68/smp_affinity_list: No such file or directory
cat: /proc/irq/68/effective_affinity_list: No such file or directory
     irq 68, cpu list , effective list
cat: /proc/irq/69/smp_affinity_list: No such file or directory
cat: /proc/irq/69/effective_affinity_list: No such file or directory
     irq 69, cpu list , effective list
cat: /proc/irq/70/smp_affinity_list: No such file or directory
cat: /proc/irq/70/effective_affinity_list: No such file or directory
     irq 70, cpu list , effective list
cat: /proc/irq/71/smp_affinity_list: No such file or directory
cat: /proc/irq/71/effective_affinity_list: No such file or directory
     irq 71, cpu list , effective list
cat: /proc/irq/72/smp_affinity_list: No such file or directory
cat: /proc/irq/72/effective_affinity_list: No such file or directory
     irq 72, cpu list , effective list
cat: /proc/irq/73/smp_affinity_list: No such file or directory
cat: /proc/irq/73/effective_affinity_list: No such file or directory
     irq 73, cpu list , effective list
cat: /proc/irq/74/smp_affinity_list: No such file or directory
cat: /proc/irq/74/effective_affinity_list: No such file or directory
     irq 74, cpu list , effective list
cat: /proc/irq/75/smp_affinity_list: No such file or directory
cat: /proc/irq/75/effective_affinity_list: No such file or directory
     irq 75, cpu list , effective list
     irq 76, cpu list 0-31, effective list 0
cat: /proc/irq/77/smp_affinity_list: No such file or directory
cat: /proc/irq/77/effective_affinity_list: No such file or directory
     irq 77, cpu list , effective list
cat: /proc/irq/78/smp_affinity_list: No such file or directory
cat: /proc/irq/78/effective_affinity_list: No such file or directory
     irq 78, cpu list , effective list
cat: /proc/irq/79/smp_affinity_list: No such file or directory
cat: /proc/irq/79/effective_affinity_list: No such file or directory
     irq 79, cpu list , effective list
cat: /proc/irq/80/smp_affinity_list: No such file or directory
cat: /proc/irq/80/effective_affinity_list: No such file or directory
     irq 80, cpu list , effective list
     irq 81, cpu list 32-39, effective list 32
     irq 82, cpu list 40-47, effective list 40
     irq 83, cpu list 48-55, effective list 48
     irq 84, cpu list 56-63, effective list 56
     irq 85, cpu list 64-71, effective list 64
     irq 86, cpu list 72-79, effective list 72
     irq 87, cpu list 80-87, effective list 80
     irq 88, cpu list 88-95, effective list 88
     irq 89, cpu list 96-103, effective list 96
     irq 90, cpu list 104-111, effective list 104
     irq 91, cpu list 112-119, effective list 112
     irq 92, cpu list 120-127, effective list 120
     irq 93, cpu list 0-7, effective list 0
     irq 94, cpu list 8-15, effective list 8
     irq 95, cpu list 16-23, effective list 16
     irq 96, cpu list 24-31, effective list 24
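
For reference, here is my own minimal userspace reconstruction of the kind
of testcase mentioned above; it is not the kernel code. It assumes a
comparator that orders nodes by ncpus, as alloc_nodes_vectors() does, and
uses a plain textbook heapsort in place of the kernel's lib/sort.c (which is
also a heapsort and, like any heapsort, not stable). Because all four nodes
have 32 CPUs, every comparison returns 0 and the extraction phase alone
permutes the entries; this sketch prints the ids as 1, 2, 3, 0, the same
order seen after sort() in the kernel.

#include <stdio.h>

struct node_vec {
	unsigned int id;	/* NUMA node id */
	unsigned int ncpus;	/* CPUs in this node */
};

/* order nodes by ncpus, as alloc_nodes_vectors() does */
static int ncpus_cmp(const struct node_vec *l, const struct node_vec *r)
{
	return (int)l->ncpus - (int)r->ncpus;
}

static void swap_nv(struct node_vec *a, struct node_vec *b)
{
	struct node_vec t = *a;

	*a = *b;
	*b = t;
}

/* sift v[start] down into a max-heap of the first n elements */
static void sift_down(struct node_vec *v, int start, int n)
{
	int root = start, child;

	while (root * 2 + 1 < n) {
		child = root * 2 + 1;
		if (child + 1 < n && ncpus_cmp(&v[child], &v[child + 1]) < 0)
			child++;
		if (ncpus_cmp(&v[root], &v[child]) >= 0)
			return;
		swap_nv(&v[root], &v[child]);
		root = child;
	}
}

/* textbook (non-stable) heapsort */
static void heapsort_nv(struct node_vec *v, int n)
{
	int i;

	for (i = n / 2 - 1; i >= 0; i--)
		sift_down(v, i, n);
	for (i = n - 1; i > 0; i--) {
		swap_nv(&v[0], &v[i]);
		sift_down(v, 0, i);
	}
}

int main(void)
{
	struct node_vec v[4] = {
		{ 0, 32 }, { 1, 32 }, { 2, 32 }, { 3, 32 },
	};
	int i;

	heapsort_nv(v, 4);
	for (i = 0; i < 4; i++)
		printf("node_vectors[%d].id = %u\n", i, v[i].id);
	return 0;
}

This only demonstrates that a non-stable sort may permute entries that
compare equal; whether the spreading should then start from node 1 instead
of node 0 is the question raised above.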


>
>
> Thanks,
> Ming



Thread overview: 7+ messages
2019-11-19  1:25 The irq Affinity is changed after the patch(Fixes: b1a5a73e64e9 ("genirq/affinity: Spread vectors on node according to nr_cpu ratio")) chenxiang (M)
2019-11-19  1:42 ` Ming Lei
     [not found]   ` <a8a89884-8323-ff70-f35e-0fcf5d7afefc@hisilicon.com>
2019-11-19  3:17     ` Ming Lei
2019-11-19  3:32       ` chenxiang (M) [this message]
2019-11-19  6:56         ` Ming Lei
2019-12-08  7:42     ` George Spelvin
2019-12-09  2:58       ` chenxiang (M)
