From: Christoph Hellwig
To: Jens Axboe
Subject: [PATCH 18/20] block: refactor submit_bio_noacct
Date: Wed, 1 Jul 2020 10:59:45 +0200
Message-Id: <20200701085947.3354405-19-hch@lst.de>
In-Reply-To: <20200701085947.3354405-1-hch@lst.de>
References: <20200701085947.3354405-1-hch@lst.de>
Cc: dm-devel@redhat.com, linux-kernel@vger.kernel.org,
    linux-m68k@lists.linux-m68k.org, linux-xtensa@linux-xtensa.org,
    drbd-dev@lists.linbit.com, linuxppc-dev@lists.ozlabs.org,
    linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org,
    linux-nvdimm@lists.01.org, linux-nvme@lists.infradead.org,
    linux-s390@vger.kernel.org

Split out a __submit_bio_noacct helper for the actual de-recursion
algorithm, and simplify the loop by using a continue when we can't enter
the queue for a bio.

Signed-off-by: Christoph Hellwig
---
 block/blk-core.c | 143 +++++++++++++++++++++++++----------------------
 1 file changed, 75 insertions(+), 68 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ff9a88d2d244cb..57b5dc00d44cb1 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1084,6 +1084,74 @@ static blk_qc_t __submit_bio(struct bio *bio)
 	return ret;
 }
 
+/*
+ * The loop in this function may be a bit non-obvious, and so deserves some
+ * explanation:
+ *
+ *  - Before entering the loop, bio->bi_next is NULL (as all callers ensure
+ *    that), so we have a list with a single bio.
+ *  - We pretend that we have just taken it off a longer list, so we assign
+ *    bio_list to a pointer to the bio_list_on_stack, thus initialising the
+ *    bio_list of new bios to be added.  ->submit_bio() may indeed add some
+ *    more bios through a recursive call to submit_bio_noacct.  If it did, we
+ *    find a non-NULL value in bio_list and re-enter the loop from the top.
+ *  - In this case we really did just take the bio of the top of the list (no
+ *    pretending) and so remove it from bio_list, and call into ->submit_bio()
+ *    again.
+ *
+ * bio_list_on_stack[0] contains bios submitted by the current ->submit_bio.
+ * bio_list_on_stack[1] contains bios that were submitted before the current
+ * ->submit_bio_bio, but that haven't been processed yet.
+ */
+static blk_qc_t __submit_bio_noacct(struct bio *bio)
+{
+	struct bio_list bio_list_on_stack[2];
+	blk_qc_t ret = BLK_QC_T_NONE;
+
+	BUG_ON(bio->bi_next);
+
+	bio_list_init(&bio_list_on_stack[0]);
+	current->bio_list = bio_list_on_stack;
+
+	do {
+		struct request_queue *q = bio->bi_disk->queue;
+		struct bio_list lower, same;
+
+		if (unlikely(bio_queue_enter(bio) != 0))
+			continue;
+
+		/*
+		 * Create a fresh bio_list for all subordinate requests.
+		 */
+		bio_list_on_stack[1] = bio_list_on_stack[0];
+		bio_list_init(&bio_list_on_stack[0]);
+
+		ret = __submit_bio(bio);
+
+		/*
+		 * Sort new bios into those for a lower level and those for the
+		 * same level.
+		 */
+		bio_list_init(&lower);
+		bio_list_init(&same);
+		while ((bio = bio_list_pop(&bio_list_on_stack[0])) != NULL)
+			if (q == bio->bi_disk->queue)
+				bio_list_add(&same, bio);
+			else
+				bio_list_add(&lower, bio);
+
+		/*
+		 * Now assemble so we handle the lowest level first.
+		 */
+		bio_list_merge(&bio_list_on_stack[0], &lower);
+		bio_list_merge(&bio_list_on_stack[0], &same);
+		bio_list_merge(&bio_list_on_stack[0], &bio_list_on_stack[1]);
+	} while ((bio = bio_list_pop(&bio_list_on_stack[0])));
+
+	current->bio_list = NULL;
+	return ret;
+}
+
 /**
  * submit_bio_noacct - re-submit a bio to the block device layer for I/O
  * @bio:  The bio describing the location in memory and on the device.
  *
@@ -1095,82 +1163,21 @@ static blk_qc_t __submit_bio(struct bio *bio)
  */
 blk_qc_t submit_bio_noacct(struct bio *bio)
 {
-	/*
-	 * bio_list_on_stack[0] contains bios submitted by the current
-	 * ->submit_bio.
-	 * bio_list_on_stack[1] contains bios that were submitted before the
-	 * current ->submit_bio_bio, but that haven't been processed yet.
-	 */
-	struct bio_list bio_list_on_stack[2];
-	blk_qc_t ret = BLK_QC_T_NONE;
-
 	if (!submit_bio_checks(bio))
-		goto out;
+		return BLK_QC_T_NONE;
 
 	/*
-	 * We only want one ->submit_bio to be active at a time, else
-	 * stack usage with stacked devices could be a problem.  So use
-	 * current->bio_list to keep a list of requests submited by a
-	 * ->submit_bio method.  current->bio_list is also used as a
-	 * flag to say if submit_bio_noacct is currently active in this
-	 * task or not.  If it is NULL, then no make_request is active.  If
-	 * it is non-NULL, then a make_request is active, and new requests
-	 * should be added at the tail
+	 * We only want one ->submit_bio to be active at a time, else stack
+	 * usage with stacked devices could be a problem.  Use current->bio_list
+	 * to collect a list of requests submited by a ->submit_bio method while
+	 * it is active, and then process them after it returned.
 	 */
 	if (current->bio_list) {
 		bio_list_add(&current->bio_list[0], bio);
-		goto out;
+		return BLK_QC_T_NONE;
 	}
 
-	/* following loop may be a bit non-obvious, and so deserves some
-	 * explanation.
-	 * Before entering the loop, bio->bi_next is NULL (as all callers
-	 * ensure that) so we have a list with a single bio.
-	 * We pretend that we have just taken it off a longer list, so
-	 * we assign bio_list to a pointer to the bio_list_on_stack,
-	 * thus initialising the bio_list of new bios to be
-	 * added.  ->submit_bio() may indeed add some more bios
-	 * through a recursive call to submit_bio_noacct.  If it
-	 * did, we find a non-NULL value in bio_list and re-enter the loop
-	 * from the top.  In this case we really did just take the bio
-	 * of the top of the list (no pretending) and so remove it from
-	 * bio_list, and call into ->submit_bio() again.
-	 */
-	BUG_ON(bio->bi_next);
-	bio_list_init(&bio_list_on_stack[0]);
-	current->bio_list = bio_list_on_stack;
-	do {
-		struct request_queue *q = bio->bi_disk->queue;
-
-		if (likely(bio_queue_enter(bio) == 0)) {
-			struct bio_list lower, same;
-
-			/* Create a fresh bio_list for all subordinate requests */
-			bio_list_on_stack[1] = bio_list_on_stack[0];
-			bio_list_init(&bio_list_on_stack[0]);
-			ret = __submit_bio(bio);
-
-			/* sort new bios into those for a lower level
-			 * and those for the same level
-			 */
-			bio_list_init(&lower);
-			bio_list_init(&same);
-			while ((bio = bio_list_pop(&bio_list_on_stack[0])) != NULL)
-				if (q == bio->bi_disk->queue)
-					bio_list_add(&same, bio);
-				else
-					bio_list_add(&lower, bio);
-			/* now assemble so we handle the lowest level first */
-			bio_list_merge(&bio_list_on_stack[0], &lower);
-			bio_list_merge(&bio_list_on_stack[0], &same);
-			bio_list_merge(&bio_list_on_stack[0], &bio_list_on_stack[1]);
-		}
-		bio = bio_list_pop(&bio_list_on_stack[0]);
-	} while (bio);
-	current->bio_list = NULL; /* deactivate */
-
-out:
-	return ret;
+	return __submit_bio_noacct(bio);
 }
 EXPORT_SYMBOL(submit_bio_noacct);
-- 
2.26.2
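
For readers less familiar with the trick described in the new comment block,
here is a minimal, self-contained userspace sketch of the same de-recursion
pattern: recursive resubmissions are not processed recursively, but queued on
a per-task list and drained by the outermost caller.  All names in the sketch
(struct fake_bio, fake_current_list, driver_submit, sketch_submit) are
invented for illustration and are not kernel interfaces, and it deliberately
omits bio_queue_enter() and the lower/same-level sorting that the real
__submit_bio_noacct() performs.

/*
 * Userspace sketch of the de-recursion pattern used by __submit_bio_noacct():
 * a "driver" that resubmits I/O does not recurse; its resubmissions are
 * queued on a per-task list and drained iteratively by the outermost caller.
 */
#include <stdio.h>
#include <stddef.h>

struct fake_bio {
	int level;			/* stands in for bio->bi_disk->queue */
	struct fake_bio *next;		/* stands in for bio->bi_next */
};

struct fake_bio_list {
	struct fake_bio *head, *tail;
};

/* stands in for current->bio_list: non-NULL means a submit is active */
static struct fake_bio_list *fake_current_list;

static void list_init(struct fake_bio_list *l)
{
	l->head = l->tail = NULL;
}

static void list_add(struct fake_bio_list *l, struct fake_bio *bio)
{
	bio->next = NULL;
	if (l->tail)
		l->tail->next = bio;
	else
		l->head = bio;
	l->tail = bio;
}

static struct fake_bio *list_pop(struct fake_bio_list *l)
{
	struct fake_bio *bio = l->head;

	if (bio) {
		l->head = bio->next;
		if (!l->head)
			l->tail = NULL;
		bio->next = NULL;
	}
	return bio;
}

static void driver_submit(struct fake_bio *bio);

/* analogue of submit_bio_noacct(): either queue, or start draining */
static void sketch_submit(struct fake_bio *bio)
{
	struct fake_bio_list on_stack;

	if (fake_current_list) {
		/* recursive call from a driver: just queue, don't recurse */
		list_add(fake_current_list, bio);
		return;
	}

	list_init(&on_stack);
	fake_current_list = &on_stack;
	do {
		driver_submit(bio);	/* may call sketch_submit() again */
	} while ((bio = list_pop(&on_stack)));
	fake_current_list = NULL;
}

static struct fake_bio bios[4];

/* a stacked "driver" that resubmits each bio to the next lower level */
static void driver_submit(struct fake_bio *bio)
{
	printf("processing bio at level %d\n", bio->level);
	if (bio->level < 3) {
		struct fake_bio *lower = &bios[bio->level + 1];

		lower->level = bio->level + 1;
		sketch_submit(lower);	/* queued, not recursed into */
	}
}

int main(void)
{
	bios[0].level = 0;
	sketch_submit(&bios[0]);
	return 0;
}

Running it prints one "processing" line per level, and the call depth never
grows with the number of stacked levels, which is the stack-usage guarantee
the kernel loop provides for stacked block drivers.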