From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, John Garry, Bart Van Assche,
	Hannes Reinecke, Christoph Hellwig, Thomas Gleixner
Subject: [PATCH V10 08/11] block: add blk_end_flush_machinery
Date: Tue, 5 May 2020 10:09:27 +0800
Message-Id: <20200505020930.1146281-9-ming.lei@redhat.com>
In-Reply-To: <20200505020930.1146281-1-ming.lei@redhat.com>
References: <20200505020930.1146281-1-ming.lei@redhat.com>

Flush requests aren't the same as normal FS requests:

1) one dedicated per-hctx flush rq is pre-allocated for sending the
flush request

2) the flush request is issued to hardware via a single machinery so
that flush merging can be applied

We can't simply re-submit flush rqs via blk_steal_bios(), so add
blk_end_flush_machinery() to collect flush requests which need to be
resubmitted:

- if one flush command without DATA is enough, send one flush and
  complete this kind of request

- otherwise, add the request into a list and let the caller re-submit it

Cc: John Garry
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Christoph Hellwig
Cc: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Tested-by: John Garry
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-flush.c | 123 +++++++++++++++++++++++++++++++++++++++++++---
 block/blk.h       |   4 ++
 2 files changed, 120 insertions(+), 7 deletions(-)
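(Notes, not part of the commit: a minimal sketch of how a caller might
drive the new helper once all CPUs mapped to a hw queue go offline.
blk_mq_hctx_deactivate() and blk_mq_resubmit_request() are hypothetical
names used purely for illustration; the real caller is wired up
elsewhere in this series.)

static void blk_mq_hctx_deactivate(struct blk_mq_hw_ctx *hctx,
				   struct list_head *inflight)
{
	LIST_HEAD(resubmit);

	/*
	 * Flush-only requests and data requests already in their
	 * POSTFLUSH stage are completed internally via one flush
	 * command; everything else comes back on the resubmit list.
	 */
	blk_end_flush_machinery(hctx, inflight, &resubmit);

	/* re-submit the survivors on a live hw queue */
	while (!list_empty(&resubmit)) {
		struct request *rq = list_first_entry(&resubmit,
				struct request, queuelist);

		list_del_init(&rq->queuelist);
		blk_mq_resubmit_request(rq);	/* hypothetical helper */
	}
}

The in/out split keeps the caller simple: only requests still owing
their DATA stage need a live hw queue; everything else is finished
here with a single flush command.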
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 977edf95d711..745d878697ed 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -170,10 +170,11 @@ static void blk_flush_complete_seq(struct request *rq,
 	unsigned int cmd_flags;
 
 	BUG_ON(rq->flush.seq & seq);
-	rq->flush.seq |= seq;
+	if (!error)
+		rq->flush.seq |= seq;
 	cmd_flags = rq->cmd_flags;
 
-	if (likely(!error))
+	if (likely(!error && !fq->flush_queue_terminating))
 		seq = blk_flush_cur_seq(rq);
 	else
 		seq = REQ_FSEQ_DONE;
@@ -200,9 +201,15 @@ static void blk_flush_complete_seq(struct request *rq,
 		 * normal completion and end it.
 		 */
 		BUG_ON(!list_empty(&rq->queuelist));
-		list_del_init(&rq->flush.list);
-		blk_flush_restore_request(rq);
-		blk_mq_end_request(rq, error);
+
+		/* Terminating code will end the request from flush queue */
+		if (likely(!fq->flush_queue_terminating)) {
+			list_del_init(&rq->flush.list);
+			blk_flush_restore_request(rq);
+			blk_mq_end_request(rq, error);
+		} else {
+			list_move_tail(&rq->flush.list, pending);
+		}
 		break;
 
 	default:
@@ -279,7 +286,8 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
 	struct request *flush_rq = fq->flush_rq;
 
 	/* C1 described at the top of this file */
-	if (fq->flush_pending_idx != fq->flush_running_idx || list_empty(pending))
+	if (fq->flush_pending_idx != fq->flush_running_idx ||
+	    list_empty(pending) || fq->flush_queue_terminating)
 		return;
 
 	/* C2 and C3
@@ -331,7 +339,7 @@ static void mq_flush_data_end_io(struct request *rq, blk_status_t error)
 	struct blk_flush_queue *fq = blk_get_flush_queue(q, ctx);
 
 	if (q->elevator) {
-		WARN_ON(rq->tag < 0);
+		WARN_ON(rq->tag < 0 && !fq->flush_queue_terminating);
 		blk_mq_put_driver_tag(rq);
 	}
 
@@ -503,3 +511,104 @@ void blk_free_flush_queue(struct blk_flush_queue *fq)
 	kfree(fq->flush_rq);
 	kfree(fq);
 }
+
+static void __blk_end_queued_flush(struct blk_flush_queue *fq,
+		unsigned int queue_idx, struct list_head *resubmit_list,
+		struct list_head *flush_list)
+{
+	struct list_head *queue = &fq->flush_queue[queue_idx];
+	struct request *rq, *nxt;
+
+	list_for_each_entry_safe(rq, nxt, queue, flush.list) {
+		unsigned int seq = blk_flush_cur_seq(rq);
+
+		list_del_init(&rq->flush.list);
+		blk_flush_restore_request(rq);
+		if (!blk_rq_sectors(rq) || seq == REQ_FSEQ_POSTFLUSH)
+			list_add_tail(&rq->queuelist, flush_list);
+		else
+			list_add_tail(&rq->queuelist, resubmit_list);
+	}
+}
+
+static void blk_end_queued_flush(struct blk_flush_queue *fq,
+		struct list_head *resubmit_list, struct list_head *flush_list)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&fq->mq_flush_lock, flags);
+	__blk_end_queued_flush(fq, 0, resubmit_list, flush_list);
+	__blk_end_queued_flush(fq, 1, resubmit_list, flush_list);
+	spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
+}
+
+/* complete requests which just require one flush command */
+static void blk_complete_flush_requests(struct blk_flush_queue *fq,
+		struct list_head *flush_list)
+{
+	struct block_device *bdev;
+	struct request *rq;
+	int error = -ENXIO;
+
+	if (list_empty(flush_list))
+		return;
+
+	rq = list_first_entry(flush_list, struct request, queuelist);
+
+	/* Send flush via one active hctx so we can move on */
+	bdev = bdget_disk(rq->rq_disk, 0);
+	if (bdev) {
+		error = blkdev_issue_flush(bdev, GFP_KERNEL, NULL);
+		bdput(bdev);
+	}
+
+	while (!list_empty(flush_list)) {
+		rq = list_first_entry(flush_list, struct request, queuelist);
+		list_del_init(&rq->queuelist);
+		blk_mq_end_request(rq, error);
+	}
+}
+
+/*
+ * Called when this hctx is inactive and all CPUs of this hctx are dead;
+ * otherwise don't reuse this function.
+ *
+ * Terminate this hw queue's flush machinery, and try to complete flush
+ * IO requests if possible, such as any flush IO without data, or flush
+ * data IO in POSTFLUSH stage. Otherwise, add the flush IOs into @out
+ * and let the caller re-submit them.
+ */
+void blk_end_flush_machinery(struct blk_mq_hw_ctx *hctx,
+		struct list_head *in, struct list_head *out)
+{
+	LIST_HEAD(resubmit_list);
+	LIST_HEAD(flush_list);
+	struct blk_flush_queue *fq = hctx->fq;
+	struct request *rq, *nxt;
+	unsigned long flags;
+
+	spin_lock_irqsave(&fq->mq_flush_lock, flags);
+	fq->flush_queue_terminating = 1;
+	spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
+
+	/* End inflight flush requests */
+	list_for_each_entry_safe(rq, nxt, in, queuelist) {
+		WARN_ON(!(rq->rq_flags & RQF_FLUSH_SEQ));
+		list_del_init(&rq->queuelist);
+		rq->end_io(rq, BLK_STS_AGAIN);
+	}
+
+	/* End queued requests */
+	blk_end_queued_flush(fq, &resubmit_list, &flush_list);
+
+	/* Send flush and complete requests which just need one flush req */
+	blk_complete_flush_requests(fq, &flush_list);
+
+	spin_lock_irqsave(&fq->mq_flush_lock, flags);
+	/* reset flush queue so that it is ready to work next time */
+	fq->flush_pending_idx = fq->flush_running_idx = 0;
+	fq->flush_queue_terminating = 0;
+	spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
+
+	list_splice_init(&resubmit_list, out);
+}
diff --git a/block/blk.h b/block/blk.h
index 591cc07e40f9..133fb0b99759 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -20,6 +20,7 @@ struct blk_flush_queue {
 	unsigned int		flush_queue_delayed:1;
 	unsigned int		flush_pending_idx:1;
 	unsigned int		flush_running_idx:1;
+	unsigned int		flush_queue_terminating:1;
 	blk_status_t		rq_status;
 	unsigned long		flush_pending_since;
 	struct list_head	flush_queue[2];
@@ -454,4 +455,7 @@ int __bio_add_pc_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
 		bool *same_page);
 
+void blk_end_flush_machinery(struct blk_mq_hw_ctx *hctx,
+		struct list_head *in, struct list_head *out);
+
 #endif /* BLK_INTERNAL_H */
-- 
2.25.2
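(A further note on the classification in __blk_end_queued_flush(): a
flush request walks PREFLUSH -> DATA -> POSTFLUSH -> DONE, skipping the
stages it doesn't need. The predicate below restates the rule on its
own, assuming the existing REQ_FSEQ_* machinery in blk-flush.c; the
helper name is invented for illustration.)

static bool blk_flush_alone_completes(struct request *rq)
{
	/*
	 * A request with no data sectors never enters the DATA stage,
	 * and one sitting in POSTFLUSH has already finished its data
	 * transfer. In both cases a single flush command is all that
	 * is still owed, so the request can be completed without being
	 * re-submitted to a live hw queue.
	 */
	return !blk_rq_sectors(rq) ||
	       blk_flush_cur_seq(rq) == REQ_FSEQ_POSTFLUSH;
}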