From: Sumit Saxena <sumit.saxena@broadcom.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>,
	James Bottomley <James.Bottomley@hansenpartnership.com>,
	Linux SCSI List <linux-scsi@vger.kernel.org>,
	linux-block@vger.kernel.org, Jens Axboe <axboe@kernel.dk>,
	Sathya Prakash <sathya.prakash@broadcom.com>,
	Chaitra P B <chaitra.basappa@broadcom.com>,
	Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>,
	Kashyap Desai <kashyap.desai@broadcom.com>,
	Shivasharan S <shivasharan.srikanteshwara@broadcom.com>,
	"Ewan D . Milne" <emilne@redhat.com>,
	Christoph Hellwig <hch@lst.de>, Hannes Reinecke <hare@suse.de>,
	Bart Van Assche <bart.vanassche@wdc.com>
Subject: Re: [PATCH 5/6] scsi: core: don't limit per-LUN queue depth for SSD when HBA needs
Date: Fri, 24 Jan 2020 18:13:40 +0530	[thread overview]
Message-ID: <CAL2rwxrLVeAZmFPGvOOqDrea8Nh3p1Cb5BSd4r4noC_8AzRtHg@mail.gmail.com> (raw)
In-Reply-To: <20200124015957.GA17387@ming.t460p>

On Fri, Jan 24, 2020 at 7:30 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> Hi Martin,
>
> On Thu, Jan 23, 2020 at 08:21:42PM -0500, Martin K. Petersen wrote:
> >
> > Ming,
> >
> > > However, it depends on whether the target device returns the congestion
> > > to the host. From my observation, it looks like there isn't such
> > > feedback from an NVMe target.
> >
> > It happens all the time with SCSI devices. It is imperative that this
> > keeps working.
> >
> > > Even if there were an SSD target that provided such congestion
> > > feedback, bypassing .device_busy wouldn't have much effect either,
> > > since blk-mq's SCHED_RESTART will retry an IO that returned
> > > STS_RESOURCE only after another in-flight command has completed.
> >
> > The reason we back off is that it allows the device to recover by
> > temporarily reducing its workload. In addition, the lower queue depth
> > alleviates the risk of commands timing out and causing application
> > I/O failures.
>
> The timeout risk may only arise when the driver/device doesn't return
> congestion feedback and, at the same time, the host queue depth is big
> enough.
>
> So far we haven't seen such an issue on NVMe, where the hw queue depth
> is 1023 and the hw queue count is often 32+, nor have we seen such a
> timeout report with that many in-flight IOs (32 * 1023) on a single
> LUN.
>
> Also, megaraid_sas's queue depth is much less than (32 * 1023), so this
> seems very unlikely to happen.
>
> MegaRAID guys, could you clarify whether this is an issue? Kashyap,
> Sumit and Shivasharan?

Hi Ming, Martin,

The megaraid_sas driver does not enable ".track_queue_depth", so
megaraid_sas adapters have never used the QUEUE FULL interface of the
Linux SCSI layer. Most of the QUEUE FULL handling is done by the
MegaRAID controller firmware, which also manages reducing the
drive-level queue depth (ramp up/down), as sketched below.
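
For reference, a minimal sketch of what "not enabling
.track_queue_depth" looks like in a host template. This is hypothetical
code with illustrative names and values, not the actual megaraid_sas
template; with the flag left at its default of 0, the midlayer's
queue-full ramp up/down handling backs off, so per-LUN queue depth
policy stays entirely in firmware:

#include <linux/module.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_cmnd.h>

/*
 * Hypothetical queuecommand hook standing in for the driver's real
 * submission path; definition elided in this sketch.
 */
static int example_queuecommand(struct Scsi_Host *shost,
				struct scsi_cmnd *scmd);

static struct scsi_host_template example_fw_managed_template = {
	.module		= THIS_MODULE,
	.name		= "example-raid",
	.queuecommand	= example_queuecommand,
	.can_queue	= 1024,		/* illustrative values */
	.this_id	= -1,
	.cmd_per_lun	= 128,
	/*
	 * .track_queue_depth left 0 (default): the midlayer will not
	 * adjust the per-LUN queue depth on QUEUE FULL; the controller
	 * firmware owns that policy.
	 */
};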

mpt3sas adapters support QUEUE FULL handling depending on the
IOCCapabilities reported by the firmware. In the default configuration
the firmware manages QUEUE FULL itself. This is not the same as the
Linux SCSI-level handling; it is a delayed retry inside the firmware.
That means we should not expect an IO timeout on QUEUE FULL from the
device, since the firmware handles it as a delayed retry. A user can
disable the firmware's QUEUE FULL handling (through customized
firmware) and let QUEUE FULL propagate back to the SCSI layer; the
firmware's handling is reported through the
"MPI2_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING" capability bit. So
for the mpt3sas driver we may use the OS's QUEUE FULL handling, and we
could opt to enable "no_device_queue_for_ssd" for mpt3sas only if the
firmware does not expose
MPI2_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING.
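
As an illustration, a hedged sketch of that capability check. The
helper function is hypothetical; ioc->facts.IOCCapabilities and the
MPI2 flag follow the mpt3sas headers, while hanging the series'
proposed no_device_queue_for_ssd flag off the Scsi_Host is an
assumption about the patch, not established API:

#include "mpt3sas_base.h"	/* driver-local header, assumed in scope */

static void example_setup_ssd_queueing(struct MPT3SAS_ADAPTER *ioc)
{
	/*
	 * Only defer QUEUE FULL handling to the OS path when the
	 * firmware does NOT advertise its own task-set-full handling.
	 */
	if (!(ioc->facts.IOCCapabilities &
	      MPI2_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING))
		ioc->shost->no_device_queue_for_ssd = 1; /* proposed flag */
}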

Thanks,
Sumit

>
> >
> > > At least, the Broadcom guys tested this patch on MegaRAID, and the
> > > results show a big improvement; that is why the flag is only set on
> > > the MegaRAID host.
> >
> > I do not question that it improves performance. That's not my point.
> >
> > > In theory, .track_queue_depth may only improve sequential IO
> > > performance for HDDs; it is not very effective for SSDs, or just
> > > saves a bit of CPU cycles in the SSD case.
> >
> > This is not about performance. This is about how the system behaves when
> > a device is starved for resources or experiencing transient failures.
>
> Could you explain a bit how this patch changes the system behavior? I
> understand that the EH just retries the incomplete requests, whose
> total number is less than the host queue depth.
>
>
> Thanks,
> Ming
>

Thread overview: 27+ messages
2020-01-19  7:14 [PATCH 0/6] scsi: support bypass device busy check for some high end HBA with SSD Ming Lei
2020-01-19  7:14 ` [PATCH 1/6] scsi: mpt3sas: don't use .device_busy in device reset routine Ming Lei
2020-01-19 20:28   ` Bart Van Assche
2020-01-19  7:14 ` [PATCH 2/6] scsi: remove .for_blk_mq Ming Lei
2020-01-19 20:29   ` Bart Van Assche
2020-01-20 10:17   ` John Garry
2020-01-20 22:12   ` Elliott, Robert (Servers)
2020-01-31  6:30   ` Christoph Hellwig
2020-02-05  2:13     ` Martin K. Petersen
2020-01-19  7:14 ` [PATCH 3/6] scsi: sd: register request queue after sd_revalidate_disk is done Ming Lei
2020-01-19 20:36   ` Bart Van Assche
2020-01-19  7:14 ` [PATCH 4/6] block: freeze queue for updating QUEUE_FLAG_NONROT Ming Lei
2020-01-19 20:40   ` Bart Van Assche
2020-01-19  7:14 ` [PATCH 5/6] scsi: core: don't limit per-LUN queue depth for SSD when HBA needs Ming Lei
2020-01-19 20:58   ` Bart Van Assche
2020-01-21  4:52   ` Martin K. Petersen
2020-01-23  2:54     ` Ming Lei
2020-01-24  1:21       ` Martin K. Petersen
2020-01-24  1:59         ` Ming Lei
2020-01-24 12:43           ` Sumit Saxena [this message]
2020-01-28  4:04             ` Martin K. Petersen
2020-01-24  0:01     ` Sumanesh Samanta
2020-01-24  1:58       ` Martin K. Petersen
2020-01-24 19:41         ` Sumanesh Samanta
2020-01-28  4:22           ` Martin K. Petersen
2020-01-31 11:39             ` Ming Lei
2020-01-19  7:14 ` [PATCH 6/6] scsi: megaraid: set flag of no_device_queue_for_ssd Ming Lei
