From: "liaochang (A)" <liaochang1@huawei.com>
To: Thomas Gleixner <tglx@linutronix.de>,
	xuyihang <xuyihang@huawei.com>, "Ming Lei" <ming.lei@redhat.com>
Cc: Peter Xu <peterx@redhat.com>, Christoph Hellwig <hch@lst.de>,
	Jason Wang <jasowang@redhat.com>,
	Luiz Capitulino <lcapitulino@redhat.com>,
	"Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>,
	"Michael S. Tsirkin" <mst@redhat.com>, <minlei@redhat.com>
Subject: Re: Virtio-scsi multiqueue irq affinity
Date: Tue, 18 May 2021 09:37:28 +0800	[thread overview]
Message-ID: <eb893e8e-4805-1a04-d934-b7f821c64a8e@huawei.com> (raw)
In-Reply-To: <87r1ifkoq5.ffs@nanos.tec.linutronix.de>

Thomas,

On 2021/5/10 15:54, Thomas Gleixner wrote:
> Liao,
> 
> On Mon, May 10 2021 at 11:19, liaochang wrote:
>> 1.We have a machine with 36 CPUs,and assign several RT threads to last
>> two CPUs(CPU-34, CPU-35).
> 
> Which kind of machine? x86?
> 
>> 2.I/O device driver create single managed irq, the affinity of which
>> includes CPU-34 and CPU-35.
> 
> If that driver creates only a single managed interrupt, then the
> possible affinity of that interrupt spawns CPUs 0 - 35.
> 
> That's expected, but what is the effective affinity of that interrupt?
> 
> # cat /proc/irq/$N/effective_affinity
> 
> Also please provide the full output of
> 
> # cat /proc/interrupts
> 
> and point out which device we are talking about.

The managed irq in question is registered by the virtio-scsi driver over PCI (x86 platform, a VM with 4 vCPUs),
as shown below.

#lspci -vvv
...
00:04.0 SCSI storage controller: Virtio: Virtio SCSI
        Subsystem: Virtio: Device 0008
        Physical Slot: 4
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 11
        Region 0: I/O ports at c140 [size=64]
        Region 1: Memory at febd2000 (32-bit, non-prefetchable) [size=4K]
        Region 4: Memory at fe004000 (64-bit, prefetchable) [size=16K]
        Capabilities: [98] MSI-X: Enable+ Count=4 Masked-
                Vector table: BAR=1 offset=00000000
                PBA: BAR=1 offset=00000800

#ls /sys/bus/pci/devices/0000:00:04.0/msi_irqs
33 34 35 36

#cat /proc/interrupts
...
 33:          0          0          0          0   PCI-MSI 65536-edge      virtio1-config
 34:          0          0          0          0   PCI-MSI 65537-edge      virtio1-control
 35:          0          0          0          0   PCI-MSI 65538-edge      virtio1-event
 36:      10637          0          0          0   PCI-MSI 65539-edge      virtio1-request
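To answer the effective-affinity question, the masks for the virtio1-request interrupt (IRQ 36 in the output above) can be read back as follows. This is only a sketch; the actual values depend on the machine:

```shell
# Effective affinity of the virtio1-request interrupt (IRQ 36 above),
# i.e. the CPU(s) the interrupt is actually delivered to:
cat /proc/irq/36/effective_affinity_list

# Possible affinity spawned by the managed interrupt
# (for a single managed vector this covers all online CPUs):
cat /proc/irq/36/smp_affinity_list
```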

As you can see, virtio-scsi allocates four MSI-X interrupts, from 33 to 36. The last one is supposed to
fire when data in the virtqueue is ready to receive; its interrupt handler then raises ksoftirqd to
process the I/O. If I pin a FIFO RT thread to CPU0, a simple I/O operation issued by the command
"dd if=/dev/zero of=/test.img bs=1K count=1 oflag=direct,sync" never finishes.
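The starvation scenario above can be sketched like this (hypothetical reproduction, assuming IRQ 36 is serviced on CPU0 and the commands are run as root):

```shell
# Pin a busy-looping SCHED_FIFO thread to CPU0, the CPU handling IRQ 36.
taskset -c 0 chrt -f 50 sh -c 'while :; do :; done' &
RT_PID=$!

# ksoftirqd/0 can no longer run, so this direct synchronous write stalls:
dd if=/dev/zero of=/test.img bs=1K count=1 oflag=direct,sync

kill "$RT_PID"
```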

Although that is expected behavior, do you think it poses an availability risk for Linux? In a cloud
environment, services owned by different teams can seriously interfere with each other simply because
of insufficient communication or an incomplete understanding of the infrastructure. Thanks.

The problem arises whenever an RT thread and ksoftirqd are scheduled on the same CPU. Besides placing
RT threads carefully, I also tried setting "rq_affinity" to 2, but that costs a 10%~30% performance
degradation on some I/O benchmarks. So I wonder: could the affinity of managed irqs be made configurable
from user space, or via a kernel boot argument? Thanks.
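For completeness, the rq_affinity workaround mentioned above is applied per block device roughly like this (a sketch; the device name sda is an assumption):

```shell
# rq_affinity=2 forces block-layer completion processing onto the CPU that
# submitted the request, steering completions away from the CPU stuck
# behind the RT thread, at the price of extra cross-CPU work.
echo 2 > /sys/block/sda/queue/rq_affinity
```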

> 
> Thanks,
> 
>         tglx
> .
> 
BR,
Liao, Chang


Thread overview: 16+ messages
2019-03-18  6:21 Virtio-scsi multiqueue irq affinity Peter Xu
2019-03-23 17:15 ` Thomas Gleixner
2019-03-25  5:02   ` Peter Xu
2019-03-25  7:06     ` Ming Lei
2019-03-25  8:53       ` Thomas Gleixner
2019-03-25  9:43         ` Peter Xu
2019-03-25 13:27           ` Thomas Gleixner
2019-03-25  9:50         ` Ming Lei
2021-05-08  7:52           ` xuyihang
2021-05-08 12:26             ` Thomas Gleixner
2021-05-10  3:19               ` liaochang (A)
2021-05-10  7:54                 ` Thomas Gleixner
2021-05-18  1:37                   ` liaochang (A) [this message]
2021-05-10  8:48               ` xuyihang
2021-05-10 19:56                 ` Thomas Gleixner
2021-05-11 12:38                   ` xuyihang
