linux-scsi.vger.kernel.org archive mirror
From: Kashyap Desai <kashyap.desai@broadcom.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>,
	linux-scsi@vger.kernel.org,
	"Martin K . Petersen" <martin.petersen@oracle.com>,
	James Bottomley <James.Bottomley@hansenpartnership.com>,
	Jens Axboe <axboe@kernel.dk>,
	"Ewan D . Milne" <emilne@redhat.com>,
	Omar Sandoval <osandov@fb.com>, Christoph Hellwig <hch@lst.de>,
	Hannes Reinecke <hare@suse.de>,
	Laurence Oberman <loberman@redhat.com>,
	Bart Van Assche <bart.vanassche@wdc.com>,
	Sathya Prakash Veerichetty <sathya.prakash@broadcom.com>
Subject: RE: [RFC PATCH V4 2/2] scsi: core: don't limit per-LUN queue depth for SSD
Date: Wed, 23 Oct 2019 13:16:48 +0530	[thread overview]
Message-ID: <1c40066e1f3361f2b6c8f90b4115ad01@mail.gmail.com> (raw)
In-Reply-To: <20191023012838.GB18083@ming.t460p>

> RE: [RFC PATCH V4 2/2] scsi: core: don't limit per-LUN queue depth for SSD
>
> On Fri, Oct 18, 2019 at 12:00:07AM +0530, Kashyap Desai wrote:
> > > On 10/9/19 2:32 AM, Ming Lei wrote:
> > > > @@ -354,7 +354,8 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
> > > >   	if (starget->can_queue > 0)
> > > >   		atomic_dec(&starget->target_busy);
> > > >
> > > > -	atomic_dec(&sdev->device_busy);
> > > > +	if (!blk_queue_nonrot(sdev->request_queue))
> > > > +		atomic_dec(&sdev->device_busy);
> > > >   }
> > > >
> > >
> > > Hi Ming,
> > >
> > > Does this patch impact the meaning of the queue_depth sysfs
> > > attribute (see also sdev_store_queue_depth()) and also the queue
> > > depth ramp up/down mechanism (see also scsi_handle_queue_ramp_up())?
> > > Have you considered to enable/disable busy tracking per LUN
> > > depending on whether or not sdev->queue_depth < shost->can_queue?
> > >
> > > The megaraid and mpt3sas drivers read sdev->device_busy directly. Is
> > > the current version of this patch compatible with these drivers?
> >
> > We need to know the per-scsi-device outstanding command count in the
> > mpt3sas and megaraid_sas drivers.
>
> Is the READ done in fast path or slow path? If it is on slow path, it
> should be easy to do via blk_mq_in_flight_rw().

The READ is done in the fast path.
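
For context, a rough sketch of that kind of fast-path read is below. The
function name and the threshold are made up for illustration and are not
the actual mpt3sas/megaraid_sas code:

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>

/* Illustrative only: the LLDs read the per-LUN outstanding count
 * directly while building or submitting a command.  MY_BUSY_THRESHOLD
 * is a made-up cutoff, not a real driver parameter.
 */
#define MY_BUSY_THRESHOLD 16

static bool my_sdev_is_busy(struct scsi_cmnd *scp)
{
        return atomic_read(&scp->device->device_busy) > MY_BUSY_THRESHOLD;
}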

>
> > Can we get a supporting API from the block layer (through the SML)?
> > Something similar to "atomic_read(&hctx->nr_active)" which can be
> > derived from sdev->request_queue->hctx?
> > At least for drivers where nr_hw_queue = 1 it would be useful, and we
> > could avoid the sdev->device_busy dependency.
>
> If you mean adding a new atomic counter, we would just be moving
> .device_busy into blk-mq, and that can become the new bottleneck.

How about the below? We define and use the API below instead of
"atomic_read(&scp->device->device_busy)", and it gives the expected
value. I have not yet measured the performance impact on a max-IOPS
profile.

static inline unsigned long sdev_nr_inflight_request(struct request_queue *q)
{
        struct blk_mq_hw_ctx *hctx;
        unsigned long nr_requests = 0;
        int i;

        /* Sum the active (in-flight) requests across all hardware queues. */
        queue_for_each_hw_ctx(q, hctx, i)
                nr_requests += atomic_read(&hctx->nr_active);

        return nr_requests;
}
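
For illustration, a hypothetical call site that replaces the direct
device_busy read with the proposed helper could look like the below
(my_sdev_is_busy() and MY_BUSY_THRESHOLD are assumed names from the
sketch above, not code from either driver):

#include <linux/blk-mq.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>

/* Hypothetical fast-path check in an LLD, using the proposed helper
 * instead of reading sdev->device_busy directly.  The threshold is
 * illustrative only.
 */
static bool my_sdev_is_busy(struct scsi_cmnd *scp)
{
        return sdev_nr_inflight_request(scp->device->request_queue) >
               MY_BUSY_THRESHOLD;
}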

Kashyap

>
>
> thanks,
> Ming

Thread overview: 24+ messages
2019-10-09  9:32 [PATCH V4 0/2] scsi: avoid atomic operations in IO path Ming Lei
2019-10-09  9:32 ` [PATCH V4 1/2] scsi: core: avoid host-wide host_busy counter for scsi_mq Ming Lei
2019-10-09 16:14   ` Bart Van Assche
2019-10-23  8:52   ` John Garry
2019-10-24  0:58     ` Ming Lei
2019-10-24  9:19       ` John Garry
2019-10-24 21:24         ` Ming Lei
2019-10-25  8:58           ` John Garry
2019-10-25  9:43             ` Ming Lei
2019-10-25 10:13               ` John Garry
2019-10-25 21:53                 ` Ming Lei
2019-10-28  9:42                   ` John Garry
2019-10-09  9:32 ` [RFC PATCH V4 2/2] scsi: core: don't limit per-LUN queue depth for SSD Ming Lei
2019-10-09 16:05   ` Bart Van Assche
2019-10-10  0:43     ` Ming Lei
2019-10-17 18:30     ` Kashyap Desai
2019-10-23  1:28       ` Ming Lei
2019-10-23  7:46         ` Kashyap Desai [this message]
2019-10-24  1:09           ` Ming Lei
2019-10-25 10:04             ` Kashyap Desai
2019-10-25 21:58               ` Ming Lei
2019-11-04  9:30                 ` Kashyap Desai
2019-11-05  0:23                   ` Ming Lei
2019-10-23  0:30   ` [scsi] cc2f854c79: suspend_stress.fail kernel test robot
