* [dm-devel] [Question] Why queue_work unneeded for REQUEUE bio
From: Jeffle Xu @ 2020-11-03  9:23 UTC
  To: snitzer; +Cc: joseph.qi, dm-devel

Hi Mike,

Why is queue_work() unnecessary here for a bio that returns with
BLK_STS_DM_REQUEUE?

Thanks
Jeffle Xu

---
 drivers/md/dm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c18fc2548518..ae550daa99b5 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -908,9 +908,11 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
 			 * Target requested pushing back the I/O.
 			 */
 			spin_lock_irqsave(&md->deferred_lock, flags);
-			if (__noflush_suspending(md))
+			if (__noflush_suspending(md)) {
 				/* NOTE early return due to BLK_STS_DM_REQUEUE below */
 				bio_list_add_head(&md->deferred, io->orig_bio);
+				queue_work(md->wq, &md->work);
+			}
 			else
 				/* noflush suspend was interrupted. */
 				io->status = BLK_STS_IOERR;
-- 
2.27.0


* Re: [dm-devel] Why queue_work unneeded for REQUEUE bio
From: Mike Snitzer @ 2020-11-03 13:48 UTC
  To: Jeffle Xu; +Cc: joseph.qi, dm-devel

On Tue, Nov 03 2020 at  4:23am -0500,
Jeffle Xu <jefflexu@linux.alibaba.com> wrote:

> Hi Mike,
> 
> Why is queue_work() unnecessary here for a bio that returns with
> BLK_STS_DM_REQUEUE?
> 
> Thanks
> Jeffle Xu
> 
> ---
>  drivers/md/dm.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index c18fc2548518..ae550daa99b5 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -908,9 +908,11 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
>  			 * Target requested pushing back the I/O.
>  			 */
>  			spin_lock_irqsave(&md->deferred_lock, flags);
> -			if (__noflush_suspending(md))
> +			if (__noflush_suspending(md)) {
>  				/* NOTE early return due to BLK_STS_DM_REQUEUE below */
>  				bio_list_add_head(&md->deferred, io->orig_bio);
> +				queue_work(md->wq, &md->work);
> +			}
>  			else
>  				/* noflush suspend was interrupted. */
>  				io->status = BLK_STS_IOERR;
> -- 
> 2.27.0
> 

For the case you highlighted (BLK_STS_DM_REQUEUE + __noflush_suspending),
I think queue_work() is omitted because we're actively dealing with the
fact that we do _not_ want to flush IO.  So kicking the workqueue there
isn't helpful: it would just process work that cannot be issued yet --
the workqueue will be kicked upon resume (see __dm_resume's
dm_queue_flush).
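
For reference, dm_queue_flush() (called from __dm_resume) is essentially
the following -- paraphrased from drivers/md/dm.c of this era, so treat
it as a sketch rather than a verbatim copy:

```
static void dm_queue_flush(struct mapped_device *md)
{
	clear_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags);
	smp_mb__after_atomic();
	/* Deferred bios (including requeued ones) get processed now. */
	queue_work(md->wq, &md->work);
}
```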

Mike


* Re: [dm-devel] Why queue_work unneeded for REQUEUE bio
From: JeffleXu @ 2020-11-04  1:34 UTC
  To: Mike Snitzer; +Cc: joseph.qi, dm-devel


On 11/3/20 9:48 PM, Mike Snitzer wrote:
> On Tue, Nov 03 2020 at  4:23am -0500,
> Jeffle Xu <jefflexu@linux.alibaba.com> wrote:
>
>> Hi Mike,
>>
>> Why is queue_work() unnecessary here for a bio that returns with
>> BLK_STS_DM_REQUEUE?
>>
>> Thanks
>> Jeffle Xu
>>
>> ---
>>   drivers/md/dm.c | 4 +++-
>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
>> index c18fc2548518..ae550daa99b5 100644
>> --- a/drivers/md/dm.c
>> +++ b/drivers/md/dm.c
>> @@ -908,9 +908,11 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
>>   			 * Target requested pushing back the I/O.
>>   			 */
>>   			spin_lock_irqsave(&md->deferred_lock, flags);
>> -			if (__noflush_suspending(md))
>> +			if (__noflush_suspending(md)) {
>>   				/* NOTE early return due to BLK_STS_DM_REQUEUE below */
>>   				bio_list_add_head(&md->deferred, io->orig_bio);
>> +				queue_work(md->wq, &md->work);
>> +			}
>>   			else
>>   				/* noflush suspend was interrupted. */
>>   				io->status = BLK_STS_IOERR;
>> -- 
>> 2.27.0
>>
> For the case you highlighted (BLK_STS_DM_REQUEUE + __noflush_suspending),
> I think queue_work() is omitted because we're actively dealing with the
> fact that we do _not_ want to flush IO.  So kicking the workqueue there
> isn't helpful: it would just process work that cannot be issued yet --
> the workqueue will be kicked upon resume (see __dm_resume's
> dm_queue_flush).


Got it. Thanks.

If we are in the process of DMF_NOFLUSH_SUSPENDING, a BLK_STS_DM_REQUEUE
bio may be affected by the suspend, so it is re-queued onto the @deferred
list; otherwise (the noflush suspend was interrupted) the bio completes
with BLK_STS_IOERR.
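
To restate the flow (paraphrasing dec_pending()'s BLK_STS_DM_REQUEUE
branch with the locking around it, not quoting it verbatim):

```
if (io->status == BLK_STS_DM_REQUEUE) {
	/* Target requested pushing back the I/O. */
	spin_lock_irqsave(&md->deferred_lock, flags);
	if (__noflush_suspending(md))
		/* Park the bio; dm_queue_flush() kicks md->wq on resume. */
		bio_list_add_head(&md->deferred, io->orig_bio);
	else
		/* noflush suspend was interrupted: fail the bio. */
		io->status = BLK_STS_IOERR;
	spin_unlock_irqrestore(&md->deferred_lock, flags);
}
```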


-- 
Thanks,
Jeffle


* Re: [dm-devel] [Question] Why queue_work unneeded for REQUEUE bio
From: JeffleXu @ 2020-11-06  3:49 UTC
  To: snitzer; +Cc: joseph.qi, dm-devel

Hi Mike,


I have another question about dm, though it's irrelevant to this original
mail.

Currently abnormal IO goes through blk_queue_split(), while normal IO is
split considering @max_sectors/@chunk_sectors (in max_io_len()).

Question 1: Why should bios be split according to queue_limits in the dm
layer at all? After all, the underlying devices will split them by their
own queue_limits if the dm layer doesn't.

Question 2: Currently only @max_sectors is considered when splitting
normal IO in the dm layer. Should we also consider
@max_segments/@max_segment_size, as blk_queue_split() does?

Thanks,
Jeffle


On 11/3/20 5:23 PM, Jeffle Xu wrote:
> Hi Mike,
>
> Why is queue_work() unnecessary here for a bio that returns with
> BLK_STS_DM_REQUEUE?
>
> Thanks
> Jeffle Xu
>
> ---
>   drivers/md/dm.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index c18fc2548518..ae550daa99b5 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -908,9 +908,11 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
>   			 * Target requested pushing back the I/O.
>   			 */
>   			spin_lock_irqsave(&md->deferred_lock, flags);
> -			if (__noflush_suspending(md))
> +			if (__noflush_suspending(md)) {
>   				/* NOTE early return due to BLK_STS_DM_REQUEUE below */
>   				bio_list_add_head(&md->deferred, io->orig_bio);
> +				queue_work(md->wq, &md->work);
> +			}
>   			else
>   				/* noflush suspend was interrupted. */
>   				io->status = BLK_STS_IOERR;

-- 
Thanks,
Jeffle


* Re: [dm-devel] Why queue_work unneeded for REQUEUE bio
From: Mike Snitzer @ 2020-11-06 15:21 UTC
  To: JeffleXu; +Cc: joseph.qi, dm-devel

On Thu, Nov 05 2020 at 10:49pm -0500,
JeffleXu <jefflexu@linux.alibaba.com> wrote:

> Hi Mike,
> 
> 
> I have another question about dm, though it's irrelevant to this original
> mail.
> 
> Currently abnormal IO goes through blk_queue_split(), while normal IO is
> split considering @max_sectors/@chunk_sectors (in max_io_len()).
> 
> Question 1: Why should bios be split according to queue_limits in the dm
> layer at all? After all, the underlying devices will split them by their
> own queue_limits if the dm layer doesn't.

Some targets have "abnormal IO" constraints in their implementation that
are reflected in the queue_limits -- discards in particular.
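
For example, a target can impose such a constraint via its .io_hints
hook; schematically (example_io_hints and EXAMPLE_BLOCK_SECTORS are
made-up names for illustration, not actual target code):

```
static void example_io_hints(struct dm_target *ti,
			     struct queue_limits *limits)
{
	/* Discards must be issued in whole target blocks:
	 * discard_granularity is in bytes, max_discard_sectors in
	 * 512-byte sectors. */
	limits->discard_granularity = EXAMPLE_BLOCK_SECTORS << SECTOR_SHIFT;
	limits->max_discard_sectors = EXAMPLE_BLOCK_SECTORS;
}
```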

> Question 2: Currently only @max_sectors is considered when splitting
> normal IO in the dm layer. Should we also consider
> @max_segments/@max_segment_size, as blk_queue_split() does?

Great question; that does appear to be the one gap in DM's splitting of
"normal" IO. I'm less familiar with @max_segments/@max_segment_size.

Since commit 5091cdec56fa ("dm: change max_io_len() to use
blk_max_size_offset()") DM has been making use of more block core code to
calculate its splits -- offset-based splitting is much more natural for DM
to perform given the potential for spanning multiple targets, etc.  But DM
targets really don't get involved with @max_segments/@max_segment_size.

dm-crypt.c:crypt_io_hints is the only DM target code that concerns
itself with @max_segment_size -- and it is punitive, setting it to
PAGE_SIZE; see commit 586b286b110e94e ("dm crypt: constrain crypt
device's max_segment_size to PAGE_SIZE") for more context.
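
To make the offset-based clamp concrete, here is a toy model of what
blk_max_size_offset() computes (max_len_at_offset is a made-up name; it
assumes a power-of-two chunk_sectors, as the block core required at the
time):

```
#include <stdint.h>

/* How many sectors may be issued at @offset without crossing the next
 * chunk boundary or exceeding max_sectors.  Hypothetical helper; only
 * the chunk_sectors/max_sectors semantics mirror the real limits. */
static uint32_t max_len_at_offset(uint64_t offset, uint32_t max_sectors,
				  uint32_t chunk_sectors)
{
	uint32_t to_boundary;

	if (!chunk_sectors)
		return max_sectors;	/* no chunk constraint */

	to_boundary = chunk_sectors - (offset & (chunk_sectors - 1));
	return to_boundary < max_sectors ? to_boundary : max_sectors;
}
```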

Mike


> On 11/3/20 5:23 PM, Jeffle Xu wrote:
> >Hi Mike,
> >
> >Why is queue_work() unnecessary here for a bio that returns with
> >BLK_STS_DM_REQUEUE?
> >
> >Thanks
> >Jeffle Xu
> >
> >---
> >  drivers/md/dm.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> >diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> >index c18fc2548518..ae550daa99b5 100644
> >--- a/drivers/md/dm.c
> >+++ b/drivers/md/dm.c
> >@@ -908,9 +908,11 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
> >  			 * Target requested pushing back the I/O.
> >  			 */
> >  			spin_lock_irqsave(&md->deferred_lock, flags);
> >-			if (__noflush_suspending(md))
> >+			if (__noflush_suspending(md)) {
> >  				/* NOTE early return due to BLK_STS_DM_REQUEUE below */
> >  				bio_list_add_head(&md->deferred, io->orig_bio);
> >+				queue_work(md->wq, &md->work);
> >+			}
> >  			else
> >  				/* noflush suspend was interrupted. */
> >  				io->status = BLK_STS_IOERR;
> 
> -- 
> Thanks,
> Jeffle
> 


* Re: [dm-devel] Why queue_work unneeded for REQUEUE bio
From: JeffleXu @ 2020-11-07  8:12 UTC
  To: Mike Snitzer; +Cc: joseph.qi, dm-devel


On 11/6/20 11:21 PM, Mike Snitzer wrote:
> On Thu, Nov 05 2020 at 10:49pm -0500,
> JeffleXu <jefflexu@linux.alibaba.com> wrote:
>
>> Hi Mike,
>>
>>
>> I have another question about dm, though it's irrelevant to this original
>> mail.
>>
>> Currently abnormal IO goes through blk_queue_split(), while normal IO is
>> split considering @max_sectors/@chunk_sectors (in max_io_len()).
>>
>> Question 1: Why should bios be split according to queue_limits in the dm
>> layer at all? After all, the underlying devices will split them by their
>> own queue_limits if the dm layer doesn't.
> Some targets have "abnormal IO" constraints in their implementation that
> are reflected in the queue_limits -- discards in particular.
>
>> Question 2: Currently only @max_sectors is considered when splitting
>> normal IO in the dm layer. Should we also consider
>> @max_segments/@max_segment_size, as blk_queue_split() does?
> Great question; that does appear to be the one gap in DM's splitting of
> "normal" IO. I'm less familiar with @max_segments/@max_segment_size.
>
> Since commit 5091cdec56fa ("dm: change max_io_len() to use
> blk_max_size_offset()") DM has been making use of more block core code to
> calculate its splits -- offset-based splitting is much more natural for DM
> to perform given the potential for spanning multiple targets, etc.  But DM
> targets really don't get involved with @max_segments/@max_segment_size.
>
> dm-crypt.c:crypt_io_hints is the only DM target code that concerns
> itself with @max_segment_size -- and it is punitive, setting it to
> PAGE_SIZE; see commit 586b286b110e94e ("dm crypt: constrain crypt
> device's max_segment_size to PAGE_SIZE") for more context.

Thanks. So the principle for handling queue_limits in the dm layer is:
only when a dm-target sets its own specific queue_limits should dm
consider those limits when splitting bios; otherwise the queue_limits
(stacked from the underlying devices) are left for the underlying devices
themselves to handle appropriately.

(In this case no dm-target sets its own @max_segments/@max_segment_size,
so these two parameters are not considered when dm splits.)


If that's the case, then why bother stacking queue_limits in
blk_stack_limits() at all? For example:
```
blk_stack_limits():

	t->max_segments = min_not_zero(t->max_segments, b->max_segments);
	t->max_segment_size = min_not_zero(t->max_segment_size,
					   b->max_segment_size);
```
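
(For context, min_not_zero() treats 0 as "no limit", so stacking only
tightens a limit when both sides actually set one; its kernel definition
is roughly:)

```
#define min_not_zero(x, y) ({			\
	typeof(x) __x = (x);			\
	typeof(y) __y = (y);			\
	__x == 0 ? __y : ((__y == 0) ? __x : min(__x, __y)); })
```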


I'm not challenging the implementation -- of course it works fine
currently. I'm just curious about the design considerations behind it. :-)


-- 
Thanks,
Jeffle
