linux-kernel.vger.kernel.org archive mirror
* Question about ufs_bsg
       [not found] <CGME20210914055907epcas2p1c9d5d91c61592a67b9ce4b2a88d8f279@epcas2p1.samsung.com>
@ 2021-09-14  5:59 ` Kiwoong Kim
  2021-09-14  6:39   ` Avri Altman
  0 siblings, 1 reply; 3+ messages in thread
From: Kiwoong Kim @ 2021-09-14  5:59 UTC (permalink / raw)
  To: linux-scsi, linux-kernel, alim.akhtar, avri.altman, jejb,
	martin.petersen, beanhuo, cang, adrian.hunter, sc.suh, hy50.seo,
	sh425.lee, bhoon95.kim

Hi,

ufs_bsg was introduced nearly three years ago and it allocates its own request queue.
I faced a symptom with this and want to ask something about it.

That is, the queue depth for ufs is sometimes limited to half of its maximum value,
even in a situation with many IO requests from the filesystem.
It turned out that this only occurs when a query is being processed at the same time.
According to my tracing, when query processing starts, the number of users for the hctx
that represents a ufs host increases to two, and with this, some paths calling the
'hctx_may_queue' function in blk-mq seem to throttle dispatches, specifically to 16,
because the number of ufs slots (32 in my case) is divided by two (users).

I found that it happens when a WriteBooster query is processed, because in my codebase,
which differs from kernel mainline, WriteBooster is only turned on under certain
conditions. But it can happen even in mainline whenever an exceptional event, or
anything else that leads to a query, occurs.

I think the throttling is a little excessive, so my question is:
is there any way to assign queue depth per user on an asymmetric basis?

Thanks.
Kiwoong Kim



^ permalink raw reply	[flat|nested] 3+ messages in thread

* RE: Question about ufs_bsg
  2021-09-14  5:59 ` Question about ufs_bsg Kiwoong Kim
@ 2021-09-14  6:39   ` Avri Altman
  2021-09-14  6:45     ` Kiwoong Kim
  0 siblings, 1 reply; 3+ messages in thread
From: Avri Altman @ 2021-09-14  6:39 UTC (permalink / raw)
  To: Kiwoong Kim, linux-scsi, linux-kernel, alim.akhtar, jejb,
	martin.petersen, beanhuo, cang, adrian.hunter, sc.suh, hy50.seo,
	sh425.lee, bhoon95.kim

Hi,

> Hi,
> 
> ufs_bsg was introduced nearly three years ago and it allocates its own request
> queue.
> I faced a symptom with this and want to ask something about it.
> 
> That is, sometimes queue depth for ufs is limited to half of its maximum
> value
> even in a situation with many IO requests from filesystem.
This is interesting indeed. Before going further with investigating this,
Could you share some more details on your setup:
The bsg node it creates was originally meant to convey a single query request via the SG_IO ioctl,
which is blocking.
 - How do you create many IO requests queueing on that request queue?
 - command upiu is not implemented, are all those IOs query requests?

Thanks,
Avri

> It turned out that it only occurs when a query is being processed at the same
> time.
> Regarding my tracing, when the query process starts, users for the hctx that
> represents
> a ufs host increase to two and with this, some paths calling 'hctx_may_queue'
> function in blk-mq seems to throttle dispatches, technically with 16 because the
> number of
> ufs slots (32 in my case) is divided by two (users).
> 
> I found that it happened when a query for write booster is processed
> because write booster only turns on in some conditions in my base that is
> different
> from kernel mainline. But when an exceptional event or others that could lead
> to a query occurs,
> it can happen even in mainline.
> 
> I think the throttling is a little bit excessive,
> so the question: is there any way to assign queue depth per user on an
> asymmetric basis?
> 
> Thanks.
> Kiwoong Kim
> 



* RE: Question about ufs_bsg
  2021-09-14  6:39   ` Avri Altman
@ 2021-09-14  6:45     ` Kiwoong Kim
  0 siblings, 0 replies; 3+ messages in thread
From: Kiwoong Kim @ 2021-09-14  6:45 UTC (permalink / raw)
  To: 'Avri Altman',
	linux-scsi, linux-kernel, alim.akhtar, jejb, martin.petersen,
	beanhuo, cang, adrian.hunter, sc.suh, hy50.seo, sh425.lee,
	bhoon95.kim

> Hi,
> 
> > Hi,
> >
> > ufs_bsg was introduced nearly three years ago and it allocates its own
> > request queue.
> > I faced a symptom with this and want to ask something about it.
> >
> > That is, sometimes queue depth for ufs is limited to half of its
> > maximum value even in a situation with many IO requests from
> > filesystem.
> This is interesting indeed. Before going further with investigating this,

Hi. What I originally described was not about ufs_bsg itself, but as you might already know, ufs_bsg also allocates its own request queue.
On that point, we can imagine it could run into the same situation.

> Could you share some more details on your setup:
> The bsg node it creates was originally meant to convey a single query
> request via the SG_IO ioctl, which is blocking.
>  - How do you create many IO requests queueing on that request queue?

I used some benchmarks, such as tiobench or Androbench, that can generate heavy IO scenarios.

>  - command upiu is not implemented, are all those IOs query requests?

What I've seen is just one query and many scsi commands.

> 
> > It turned out that it only occurs when a query is being processed at
> > the same time.
> > Regarding my tracing, when the query process starts, users for the
> > hctx that represents a ufs host increase to two and with this, some
> > paths calling 'hctx_may_queue'
> > function in blk-mq seems to throttle dispatches, technically with 16
> > because the number of ufs slots (32 in my case) is divided by two
> > (users).
> >
> > I found that it happened when a query for write booster is processed
> > because write booster only turns on in some conditions in my base that
> > is different from kernel mainline. But when an exceptional event or
> > others that could lead to a query occurs, it can happen even in
> > mainline.
> >
> > I think the throttling is a little bit excessive, so the question: is
> > there any way to assign queue depth per user on an asymmetric basis?
> >
> > Thanks.
> > Kiwoong Kim
> >




end of thread, other threads:[~2021-09-14  6:45 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CGME20210914055907epcas2p1c9d5d91c61592a67b9ce4b2a88d8f279@epcas2p1.samsung.com>
2021-09-14  5:59 ` Question about ufs_bsg Kiwoong Kim
2021-09-14  6:39   ` Avri Altman
2021-09-14  6:45     ` Kiwoong Kim
