From: Tejun Heo <tj@kernel.org>
To: axboe@kernel.dk
Cc: linux-kernel@vger.kernel.org, oleg@redhat.com, peterz@infradead.org,
    kernel-team@fb.com, osandov@fb.com, linux-block@vger.kernel.org,
    hch@lst.de, Tejun Heo <tj@kernel.org>, "jianchao.wang"
Subject: [PATCH 5/6] blk-mq: remove REQ_ATOM_COMPLETE usages from blk-mq
Date: Tue, 12 Dec 2017 11:01:33 -0800
Message-Id: <20171212190134.535941-6-tj@kernel.org>
In-Reply-To: <20171212190134.535941-1-tj@kernel.org>
References: <20171212190134.535941-1-tj@kernel.org>

After the recent updates to use generation number and state based
synchronization, blk-mq no longer depends on REQ_ATOM_COMPLETE for
anything.  Remove all REQ_ATOM_COMPLETE usages.  This removes atomic
bitops from the hot paths too.

v2: Removed blk_clear_rq_complete() from blk_mq_rq_timed_out().

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: "jianchao.wang"
---
 block/blk-mq.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 73d6444..7269552 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -596,14 +596,12 @@ void blk_mq_complete_request(struct request *rq)
 	 */
 	if (!(hctx->flags & BLK_MQ_F_BLOCKING)) {
 		rcu_read_lock();
-		if (blk_mq_rq_aborted_gstate(rq) != rq->gstate &&
-		    !blk_mark_rq_complete(rq))
+		if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
 			__blk_mq_complete_request(rq);
 		rcu_read_unlock();
 	} else {
 		srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
-		if (blk_mq_rq_aborted_gstate(rq) != rq->gstate &&
-		    !blk_mark_rq_complete(rq))
+		if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
 			__blk_mq_complete_request(rq);
 		srcu_read_unlock(hctx->queue_rq_srcu, srcu_idx);
 	}
@@ -650,8 +648,6 @@ void blk_mq_start_request(struct request *rq)
 	write_seqcount_end(&rq->gstate_seq);
 
 	set_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
-	if (test_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags))
-		clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
 
 	if (q->dma_drain_size && blk_rq_bytes(rq)) {
 		/*
@@ -819,7 +815,6 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
 		req->aborted_gstate = 0;
 		u64_stats_update_end(&req->aborted_gstate_sync);
 		blk_add_timer(req);
-		blk_clear_rq_complete(req);
 		break;
 	case BLK_EH_NOT_HANDLED:
 		break;
@@ -870,8 +865,7 @@ static void blk_mq_terminate_expired(struct blk_mq_hw_ctx *hctx,
 	 * now guaranteed to see @rq->aborted_gstate and yield.  If
 	 * @rq->aborted_gstate still matches @rq->gstate, @rq is ours.
 	 */
-	if (READ_ONCE(rq->gstate) == rq->aborted_gstate &&
-	    !blk_mark_rq_complete(rq))
+	if (READ_ONCE(rq->gstate) == rq->aborted_gstate)
		blk_mq_rq_timed_out(rq, reserved);
 }
-- 
2.9.5
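
For readers coming to this patch without the rest of the series, the
check that replaces the REQ_ATOM_COMPLETE bit can be illustrated with a
minimal userspace sketch.  This is not kernel code: the field names
mirror the patch (gstate, aborted_gstate), but the seqcount/RCU/
u64_stats machinery is stood in for by plain C11 atomics, and the state
bits folded into the real gstate are omitted.

/*
 * Userspace sketch (assumed, simplified) of the generation-number
 * scheme this series relies on.  Only the lose-the-race behavior is
 * modeled, not the real blk-mq API.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct request {
	_Atomic unsigned long gstate;		/* bumped on each (re)issue */
	_Atomic unsigned long aborted_gstate;	/* snapshot taken by timeout */
};

/* Issue path: advancing the generation invalidates any stale abort mark. */
static void start_request(struct request *rq)
{
	atomic_fetch_add(&rq->gstate, 1);
}

/* Timeout path: claim the request by recording its current generation. */
static void mark_aborted(struct request *rq)
{
	atomic_store(&rq->aborted_gstate, atomic_load(&rq->gstate));
}

/*
 * Completion path: completion may proceed only while the timeout path's
 * snapshot does not match, so no test-and-set on a COMPLETE bit is
 * needed to pick a single winner.
 */
static bool try_complete(struct request *rq)
{
	return atomic_load(&rq->aborted_gstate) != atomic_load(&rq->gstate);
}

int main(void)
{
	struct request rq;

	atomic_init(&rq.gstate, 0);
	atomic_init(&rq.aborted_gstate, 0);

	start_request(&rq);
	printf("complete before timeout fires:  %d\n", try_complete(&rq)); /* 1 */

	mark_aborted(&rq);
	printf("complete after timeout claimed: %d\n", try_complete(&rq)); /* 0 */

	start_request(&rq);	/* BLK_EH_RESET_TIMER reissue bumps gstate */
	printf("complete after reissue:         %d\n", try_complete(&rq)); /* 1 */
	return 0;
}

The point of the scheme is that ownership of a request's completion is
decided by comparing generations rather than by an atomic
test_and_set_bit() on REQ_ATOM_COMPLETE, which is what lets the atomic
bitops come out of the completion and timeout hot paths above.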