linux-kernel.vger.kernel.org archive mirror
From: xuyihang <xuyihang@huawei.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ming Lei <ming.lei@redhat.com>
Cc: Peter Xu <peterx@redhat.com>, Christoph Hellwig <hch@lst.de>,
	Jason Wang <jasowang@redhat.com>,
	Luiz Capitulino <lcapitulino@redhat.com>,
	"Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>,
	"Michael S. Tsirkin" <mst@redhat.com>, <minlei@redhat.com>,
	<liaochang1@huawei.com>
Subject: Re: Virtio-scsi multiqueue irq affinity
Date: Mon, 10 May 2021 16:48:31 +0800	[thread overview]
Message-ID: <963e38b0-a7d6-0b13-af89-81b03028d1ae@huawei.com> (raw)
In-Reply-To: <87zgx5l8ck.ffs@nanos.tec.linutronix.de>

Thomas,

On 2021/5/8 20:26, Thomas Gleixner wrote:
> Yihang,
>
> On Sat, May 08 2021 at 15:52, xuyihang wrote:
>> We are dealing with a scenario which may need to assign a default
>> irqaffinity for managed IRQ.
>>
>> Assume we have an RT thread at full CPU usage, bound to a specific
>> CPU.
>>
>> In the meantime, the interrupt handler registered by a device, whose
>> softirq part runs in ksoftirqd, may never have a chance to run. (And we
>> don't want to use an isolated CPU.)
> A device cannot register an interrupt handler in ksoftirqd.
>
>> There could be a couple way to deal with this problem:
>>
>> 1. Adjust the priority of ksoftirqd or the RT thread, so the interrupt
>> handler could preempt the RT thread. However, I am not sure whether this
>> could have side effects or not.
>>
>> 2. Adjust the interrupt CPU affinity or the RT thread affinity. But
>> managed IRQ seems designed to forbid users from manipulating interrupt
>> affinity.
>>
>> To me, managed IRQ seems coupled with the user-side application.
>>
>> Would you share your thoughts about this issue please?
> Can you please provide a more detailed description of your system?
>
>      - Number of CPUs
It's a 4 CPU x86 VM.
>      - Kernel version
This experiment ran on linux-4.19.
>      - Is NOHZ full enabled?
nohz=off
>      - Any isolation mechanisms enabled, and if so how are they
>        configured (e.g. on the kernel command line)?

Some cores are isolated via the command line (e.g. isolcpus=3) and the RT
thread is bound to them; no other isolation is configured.

>      - Number of queues in the multiqueue device

Only one queue.

[root@localhost ~]# cat /proc/interrupts | grep request
  27:       5499          0          0          0   PCI-MSI 65539-edge      virtio1-request

This environment is a virtual machine and it's a virtio device; I guess it
should not make any difference in this case.

>      - Is the RT thread issuing I/O to the multiqueue device?

The RT thread doesn't issue IO.



We simplified the reproduction procedure:

1. Start a busy-looping program with near 100% CPU usage, named print:

./print 1 1 &


2. Make the program a realtime application:

chrt -f -p 1 11514


3. Bind the RT process to the **managed irq** core

taskset -cpa 0 11514


4. Use dd to write to the hard drive; dd cannot finish and return:

dd if=/dev/zero of=/test.img bs=1K count=1 oflag=direct,sync &
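For convenience, the four steps above can be combined into one sketch. This assumes ./print is the busy-loop binary from step 1 and discovers its PID via $! instead of the hard-coded 11514:

```shell
#!/bin/sh
# Reproduction sketch: starve block-I/O softirq processing on CPU 0
# with a pinned SCHED_FIFO busy loop. Requires root and an RT-capable
# kernel; ./print is the (hypothetical) 100%-CPU busy-loop program.

./print 1 1 &                 # step 1: busy loop at ~100% CPU
pid=$!

chrt -f -p 1 "$pid"           # step 2: make it SCHED_FIFO, priority 1
taskset -cpa 0 "$pid"         # step 3: pin it (and all threads) to CPU 0

# Step 4: with CPU 0 monopolized, this dd never completes, because the
# block completion softirq queued to CPU 0 cannot run:
dd if=/dev/zero of=/test.img bs=1K count=1 oflag=direct,sync &
```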


Since the CPU is fully utilized by the RT application, and the hard drive
driver chose CPU0 to handle its softirq, there is no chance for dd to run.

     PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM TIME+ COMMAND
   11514 root      -2   0    2228    740    676 R 100.0   0.0 3:26.70 print


If we make some changes to this experiment:

1. Make the RT application use less CPU time instead of 100%, and the
problem disappears.

2. Change rq_affinity to 2, so that the softirq is not handled on the same
core as the RT thread, and the problem also disappears. However, this
approach results in roughly a 10%-30% random-write performance reduction
compared to rq_affinity = 1, which may have better cache utilization:

echo 2 > /sys/block/sda/queue/rq_affinity


Therefore, I want to exclude some CPUs from managed IRQs via a boot
parameter, similar in approach to 11ea68f553e2 ("genirq, sched/isolation:
Isolate from handling managed interrupts").
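For reference, mainline kernels since v5.6 expose this through the managed_irq flag of isolcpus=, added by the commit cited above; a sketch of the boot parameter, assuming CPU 3 is the one to shield:

```
# Kernel command line: isolate CPU 3 from the scheduler domains and, as
# far as possible, from servicing managed interrupts
isolcpus=managed_irq,domain,3
```

On a 4.19 kernel this flag does not exist, so it would need to be backported.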


Thanks,

Yihang


Thread overview: 16+ messages
2019-03-18  6:21 Virtio-scsi multiqueue irq affinity Peter Xu
2019-03-23 17:15 ` Thomas Gleixner
2019-03-25  5:02   ` Peter Xu
2019-03-25  7:06     ` Ming Lei
2019-03-25  8:53       ` Thomas Gleixner
2019-03-25  9:43         ` Peter Xu
2019-03-25 13:27           ` Thomas Gleixner
2019-03-25  9:50         ` Ming Lei
2021-05-08  7:52           ` xuyihang
2021-05-08 12:26             ` Thomas Gleixner
2021-05-10  3:19               ` liaochang (A)
2021-05-10  7:54                 ` Thomas Gleixner
2021-05-18  1:37                   ` liaochang (A)
2021-05-10  8:48               ` xuyihang [this message]
2021-05-10 19:56                 ` Thomas Gleixner
2021-05-11 12:38                   ` xuyihang
