linux-kernel.vger.kernel.org archive mirror
* [PATCH] blk-mq: ensure hctx to be ran on mapped cpu when issue directly
@ 2018-10-24 15:20 Jianchao Wang
  2018-10-25 16:25 ` Jens Axboe
  0 siblings, 1 reply; 4+ messages in thread
From: Jianchao Wang @ 2018-10-24 15:20 UTC (permalink / raw)
  To: axboe; +Cc: ming.lei, linux-block, linux-kernel

When a request is issued directly and the task has been migrated
away from the cpu on which the request was allocated, the hctx
could be run on a cpu to which it is not mapped. To fix this,
insert the request if BLK_MQ_F_BLOCKING is set, check whether the
current cpu is mapped to the hctx, and invoke
__blk_mq_issue_directly with preemption disabled.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e3c39ea..0cdc306 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1717,6 +1717,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 {
 	struct request_queue *q = rq->q;
 	bool run_queue = true;
+	blk_status_t ret;
+
+	if (hctx->flags & BLK_MQ_F_BLOCKING) {
+		bypass_insert = false;
+		goto insert;
+	}
 
 	/*
 	 * RCU or SRCU read lock is needed before checking quiesced flag.
@@ -1734,6 +1740,11 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	if (q->elevator && !bypass_insert)
 		goto insert;
 
+	if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
+		bypass_insert = false;
+		goto insert;
+	}
+
 	if (!blk_mq_get_dispatch_budget(hctx))
 		goto insert;
 
@@ -1742,8 +1753,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 		goto insert;
 	}
 
-	return __blk_mq_issue_directly(hctx, rq, cookie);
+	ret = __blk_mq_issue_directly(hctx, rq, cookie);
+	put_cpu();
+	return ret;
+
 insert:
+	put_cpu();
 	if (bypass_insert)
 		return BLK_STS_RESOURCE;
 
-- 
2.7.4



* Re: [PATCH] blk-mq: ensure hctx to be ran on mapped cpu when issue directly
  2018-10-24 15:20 [PATCH] blk-mq: ensure hctx to be ran on mapped cpu when issue directly Jianchao Wang
@ 2018-10-25 16:25 ` Jens Axboe
  2018-10-26  1:38   ` jianchao.wang
  0 siblings, 1 reply; 4+ messages in thread
From: Jens Axboe @ 2018-10-25 16:25 UTC (permalink / raw)
  To: Jianchao Wang; +Cc: ming.lei, linux-block, linux-kernel

On 10/24/18 9:20 AM, Jianchao Wang wrote:
> When a request is issued directly and the task has been migrated
> away from the cpu on which the request was allocated, the hctx
> could be run on a cpu to which it is not mapped. To fix this,
> insert the request if BLK_MQ_F_BLOCKING is set, check whether the
> current cpu is mapped to the hctx, and invoke
> __blk_mq_issue_directly with preemption disabled.
> 
> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
> ---
>  block/blk-mq.c | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index e3c39ea..0cdc306 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1717,6 +1717,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>  {
>  	struct request_queue *q = rq->q;
>  	bool run_queue = true;
> +	blk_status_t ret;
> +
> +	if (hctx->flags & BLK_MQ_F_BLOCKING) {
> +		bypass_insert = false;
> +		goto insert;
> +	}

I'd do a prep patch that moves the insert logic out of this function,
and just have the caller do it by returning BLK_STS_RESOURCE, for
instance. It's silly that we have that in both the caller and inside
this function.

> @@ -1734,6 +1740,11 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>  	if (q->elevator && !bypass_insert)
>  		goto insert;
>  
> +	if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
> +		bypass_insert = false;
> +		goto insert;
> +	}

Should be fine to just do smp_processor_id() here, as we're inside
hctx_lock() here.

-- 
Jens Axboe



* Re: [PATCH] blk-mq: ensure hctx to be ran on mapped cpu when issue directly
  2018-10-25 16:25 ` Jens Axboe
@ 2018-10-26  1:38   ` jianchao.wang
  2018-10-26  1:39     ` Jens Axboe
  0 siblings, 1 reply; 4+ messages in thread
From: jianchao.wang @ 2018-10-26  1:38 UTC (permalink / raw)
  To: Jens Axboe; +Cc: ming.lei, linux-block, linux-kernel

Hi Jens

On 10/26/18 12:25 AM, Jens Axboe wrote:
> On 10/24/18 9:20 AM, Jianchao Wang wrote:
>> When a request is issued directly and the task has been migrated
>> away from the cpu on which the request was allocated, the hctx
>> could be run on a cpu to which it is not mapped. To fix this,
>> insert the request if BLK_MQ_F_BLOCKING is set, check whether the
>> current cpu is mapped to the hctx, and invoke
>> __blk_mq_issue_directly with preemption disabled.
>>
>> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
>> ---
>>  block/blk-mq.c | 17 ++++++++++++++++-
>>  1 file changed, 16 insertions(+), 1 deletion(-)
>>
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index e3c39ea..0cdc306 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -1717,6 +1717,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>>  {
>>  	struct request_queue *q = rq->q;
>>  	bool run_queue = true;
>> +	blk_status_t ret;
>> +
>> +	if (hctx->flags & BLK_MQ_F_BLOCKING) {
>> +		bypass_insert = false;
>> +		goto insert;
>> +	}
> 
> I'd do a prep patch that moves the insert logic out of this function,
> and just have the caller do it by returning BLK_STS_RESOURCE, for
> instance. It's silly that we have that in both the caller and inside
> this function.

Yes.

> 
>> @@ -1734,6 +1740,11 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>>  	if (q->elevator && !bypass_insert)
>>  		goto insert;
>>  
>> +	if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
>> +		bypass_insert = false;
>> +		goto insert;
>> +	}
> 
> Should be fine to just do smp_processor_id() here, as we're inside
> hctx_lock() here.
> 

If RCU is preemptible, smp_processor_id() will not be enough here.

Thanks
Jianchao


* Re: [PATCH] blk-mq: ensure hctx to be ran on mapped cpu when issue directly
  2018-10-26  1:38   ` jianchao.wang
@ 2018-10-26  1:39     ` Jens Axboe
  0 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2018-10-26  1:39 UTC (permalink / raw)
  To: jianchao.wang; +Cc: ming.lei, linux-block, linux-kernel

On 10/25/18 7:38 PM, jianchao.wang wrote:
> Hi Jens
> 
> On 10/26/18 12:25 AM, Jens Axboe wrote:
>> On 10/24/18 9:20 AM, Jianchao Wang wrote:
>>> When a request is issued directly and the task has been migrated
>>> away from the cpu on which the request was allocated, the hctx
>>> could be run on a cpu to which it is not mapped. To fix this,
>>> insert the request if BLK_MQ_F_BLOCKING is set, check whether the
>>> current cpu is mapped to the hctx, and invoke
>>> __blk_mq_issue_directly with preemption disabled.
>>>
>>> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
>>> ---
>>>  block/blk-mq.c | 17 ++++++++++++++++-
>>>  1 file changed, 16 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>>> index e3c39ea..0cdc306 100644
>>> --- a/block/blk-mq.c
>>> +++ b/block/blk-mq.c
>>> @@ -1717,6 +1717,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>>>  {
>>>  	struct request_queue *q = rq->q;
>>>  	bool run_queue = true;
>>> +	blk_status_t ret;
>>> +
>>> +	if (hctx->flags & BLK_MQ_F_BLOCKING) {
>>> +		bypass_insert = false;
>>> +		goto insert;
>>> +	}
>>
>> I'd do a prep patch that moves the insert logic out of this function,
>> and just have the caller do it by returning BLK_STS_RESOURCE, for
>> instance. It's silly that we have that in both the caller and inside
>> this function.
> 
> Yes.
> 
>>
>>> @@ -1734,6 +1740,11 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>>>  	if (q->elevator && !bypass_insert)
>>>  		goto insert;
>>>  
>>> +	if (!cpumask_test_cpu(get_cpu(), hctx->cpumask)) {
>>> +		bypass_insert = false;
>>> +		goto insert;
>>> +	}
>>
>> Should be fine to just do smp_processor_id() here, as we're inside
>> hctx_lock() here.
>>
> 
> If RCU is preemptible, smp_processor_id() will not be enough here.

True, for some reason I keep forgetting that rcu_*_lock() doesn't
imply preempt_disable() anymore.

-- 
Jens Axboe


