linux-scsi.vger.kernel.org archive mirror
From: John Garry <john.garry@huawei.com>
To: Roger Willcocks <roger@filmlight.ltd.uk>
Cc: <Don.Brace@microchip.com>, <mwilck@suse.com>,
	<buczek@molgen.mpg.de>, <martin.petersen@oracle.com>,
	<ming.lei@redhat.com>, <jejb@linux.vnet.ibm.com>,
	<linux-scsi@vger.kernel.org>, <hare@suse.de>,
	<Kevin.Barnett@microchip.com>, <pmenzel@molgen.mpg.de>,
	<hare@suse.com>
Subject: Re: [PATCH] scsi: scsi_host_queue_ready: increase busy count early
Date: Tue, 23 Feb 2021 16:17:04 +0000	[thread overview]
Message-ID: <81afb054-fe31-3e67-0087-980a31d5adb6@huawei.com> (raw)
In-Reply-To: <BF6685B6-B23F-49BC-B905-6ABE6FD3F44D@filmlight.ltd.uk>

On 23/02/2021 14:06, Roger Willcocks wrote:
> 
> 
>> On 23 Feb 2021, at 08:57, John Garry <john.garry@huawei.com> wrote:
>>
>> On 22/02/2021 14:23, Roger Willcocks wrote:
>>> FYI we have exactly this issue on a machine here running CentOS 8.3 (kernel 4.18.0-240.1.1) (so presumably this happens in RHEL 8 too.)
>>> Controller is MSCC / Adaptec 3154-8i16e driving 60 x 12TB HGST drives configured as five x twelve-drive raid-6, software striped using md, and formatted with xfs.
>>> Test software writes to the array using multiple threads in parallel.
>>> The smartpqi driver would report controller offline within ten minutes or so, with status code 0x6100c
>>> Changed the driver to set 'nr_hw_queues = 1' and then tested by filling the array with random files (which took a couple of days), which completed fine, so it looks like that one-line change fixes it.
>>
>> That just makes the driver single-queue.
>>
> 
> All I can say is it fixes the problem. Write performance is two or three percent faster than CentOS 6.5 on the same hardware.
> 
> 
>> As such, since the driver uses blk_mq_unique_tag_to_hwq(), only hw queue #0 will ever be used in the driver.
>>
>> And then, since the driver still spreads MSI-X interrupt vectors over all CPUs [from pci_alloc_irq_vectors(PCI_IRQ_AFFINITY)], if the CPUs associated with HW queue #0 are offlined (probably just cpu0), there are no CPUs available to service the queue #0 interrupt. That's what I think would happen, from a quick glance at the code.
>>
> 
> Surely that would be an issue even if it used multiple queues (one of which would be queue #0) ?
> 

Well, no. Because there is currently a symmetry between the HW queue 
contexts in the block layer and the HW queues in the LLDD. So if hwq0 
were mapped to cpu0 only, and cpu0 went offline, the block layer would 
not send commands on hwq0. Setting nr_hw_queues=1 breaks that symmetry: 
every CPU tries to send on hwq0, but the irq core code disables the hwq0 
interrupt when cpu0 is offline - that's because the interrupt is managed.

That's how it looks to me - I did not check the LLDD code too closely. 
Please discuss with Don.

Thanks,
John

>>
>>> Would, of course, be helpful if this was back-ported.
>>> —
>>> Roger



Thread overview: 27+ messages
2021-01-20 18:45 [PATCH] scsi: scsi_host_queue_ready: increase busy count early mwilck
2021-01-20 20:26 ` John Garry
2021-01-21 12:01   ` Donald Buczek
2021-01-21 12:35     ` John Garry
2021-01-21 12:44       ` Donald Buczek
2021-01-21 13:05         ` John Garry
2021-01-21 23:32           ` Martin Wilck
2021-03-11 16:36             ` Donald Buczek
2021-02-01 22:44           ` Don.Brace
2021-02-02 20:04           ` Don.Brace
2021-02-02 20:48             ` Martin Wilck
2021-02-03  8:49               ` John Garry
2021-02-03  8:58                 ` Paul Menzel
2021-02-03 15:30                   ` Don.Brace
2021-02-03 15:56               ` Don.Brace
2021-02-03 18:25                 ` John Garry
2021-02-03 19:01                   ` Don.Brace
2021-02-22 14:23                 ` Roger Willcocks
2021-02-23  8:57                   ` John Garry
2021-02-23 14:06                     ` Roger Willcocks
2021-02-23 16:17                       ` John Garry [this message]
2021-03-01 14:51                   ` Paul Menzel
2021-01-21  9:07 ` Donald Buczek
2021-01-21 10:05   ` Martin Wilck
2021-01-22  0:14     ` Martin Wilck
2021-01-22  3:23 ` Ming Lei
2021-01-22 14:05   ` Martin Wilck
