From: Linus Walleij <linus.walleij@linaro.org>
To: linux-mmc@vger.kernel.org, Ulf Hansson <ulf.hansson@linaro.org>,
	Adrian Hunter <adrian.hunter@intel.com>,
	Paolo Valente <paolo.valente@linaro.org>
Cc: Chunyan Zhang <zhang.chunyan@linaro.org>,
	Baolin Wang <baolin.wang@linaro.org>,
	linux-block@vger.kernel.org, Jens Axboe <axboe@kernel.dk>,
	Christoph Hellwig <hch@lst.de>, Arnd Bergmann <arnd@arndb.de>,
	Linus Walleij <linus.walleij@linaro.org>
Subject: [PATCH 15/16] mmc: queue: issue requests in massive parallel
Date: Thu,  9 Feb 2017 16:34:02 +0100
Message-ID: <20170209153403.9730-16-linus.walleij@linaro.org>
In-Reply-To: <20170209153403.9730-1-linus.walleij@linaro.org>

This makes a crucial change to the issuing mechanism for MMC
requests:

Before commit "mmc: core: move the asynchronous post-processing",
some parallelism on the read/write requests was achieved by
speculatively postprocessing a request and then re-preprocessing
and re-issuing it if something went wrong, which we only discover
later when checking for an error.

This is kind of ugly. Instead we use a mechanism like this:

We issue requests, and when they come back from the hardware we
know whether they finished successfully or not. If the request
was successful, we complete the asynchronous request and let a
new request start on the hardware immediately. If, and only if,
the hardware returned an error do we go down the error path.

This is achieved by splitting the completion path from the
hardware in two: a success path that ends up calling down to
mmc_blk_rw_done_success() and an error path that calls down to
mmc_blk_rw_done_error().

This has a profound effect: we reintroduce the parallelism on
the success path, as mmc_post_req() can now be called while
the next request is in transit (just like prior to commit
"mmc: core: move the asynchronous post-processing"), but ALSO
we can call mmc_queue_bounce_post() and blk_end_request() in
parallel with that transfer, as sketched below.
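
Condensed, the finalization flow in mmc_finalize_areq() after
this patch is roughly the following (a sketch of the core.c hunk
below, not the verbatim code):

	if (status != MMC_BLK_SUCCESS) {
		/* Slow path: may retransmit via mmc_restart_areq() */
		mmc_post_req(host, areq->mrq, 0);
		mmc_blk_rw_done_error(areq, status);
		/* Only now let the next request at the hardware */
		complete(&areq->complete);
	} else {
		/* Fast path: unblock the next request first... */
		complete(&areq->complete);
		/* ...then postprocess in parallel with its transfer */
		mmc_post_req(host, areq->mrq, 0);
		mmc_blk_rw_done_success(areq);
	}
	mmc_queue_req_put(mq_rq);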

The latter has the profound effect of issuing a new request
again, so we actually need to have at least three requests in
flight at the same time: while the new request is being issued
we have not yet dropped the reference to our own struct
mmc_queue_req, so we need at least three. I set the pool to 4
requests for now.
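
To make the pool sizing reasoning explicit, here is a minimal
sketch of what mmc_init_queue() ends up doing (see the queue.c
hunk below):

	/*
	 * At least three struct mmc_queue_req can be live at once:
	 * one in transit on the hardware, one being postprocessed
	 * (bounce copy, blk_end_request()), and the new one that
	 * blk_end_request() just caused to be issued. Size the
	 * pool to four to keep one slot spare.
	 */
	mq->qdepth = 4;
	mq->mqrq = kcalloc(mq->qdepth, sizeof(struct mmc_queue_req),
			   GFP_KERNEL);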

I expect the improvement to be noticeable on systems that use
bounce buffers, since they can now process requests in parallel
with post-processing their bounce buffers, but I don't have a
test target for that.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
 drivers/mmc/core/block.c | 61 +++++++++++++++++++++++++++++++++++++-----------
 drivers/mmc/core/block.h |  4 +++-
 drivers/mmc/core/core.c  | 27 ++++++++++++++++++---
 drivers/mmc/core/queue.c |  2 +-
 4 files changed, 75 insertions(+), 19 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index acca15cc1807..f1008ce5376b 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1622,8 +1622,51 @@ static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq)
 	mmc_restart_areq(mq->card->host, &mq_rq->areq);
 }
 
-void mmc_blk_rw_done(struct mmc_async_req *areq,
-		     enum mmc_blk_status status)
+/**
+ * Final handling of an asynchronous request if there was no error.
+ * This is the common path that we take when everything is nice
+ * and smooth. The status from the command is always MMC_BLK_SUCCESS.
+ */
+void mmc_blk_rw_done_success(struct mmc_async_req *areq)
+{
+	struct mmc_queue_req *mq_rq;
+	struct mmc_blk_request *brq;
+	struct mmc_blk_data *md;
+	struct request *old_req;
+	bool req_pending;
+	int type;
+
+	mq_rq =	container_of(areq, struct mmc_queue_req, areq);
+	md = mq_rq->mq->blkdata;
+	brq = &mq_rq->brq;
+	old_req = mq_rq->req;
+	type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+
+	mmc_queue_bounce_post(mq_rq);
+	mmc_blk_reset_success(md, type);
+	req_pending = blk_end_request(old_req, 0,
+				      brq->data.bytes_xfered);
+	/*
+	 * If the blk_end_request function returns non-zero even
+	 * though all data has been transferred and no errors
+	 * were returned by the host controller, it's a bug.
+	 */
+	if (req_pending) {
+		pr_err("%s BUG rq_tot %d d_xfer %d\n",
+		       __func__, blk_rq_bytes(old_req),
+		       brq->data.bytes_xfered);
+		return;
+	}
+}
+
+/**
+ * Error, recapture, retry etc for asynchronous requests.
+ * This is the error path that we take when there is bad status
+ * coming back from the hardware and we need to do a bit of
+ * cleverness.
+ */
+void mmc_blk_rw_done_error(struct mmc_async_req *areq,
+			   enum mmc_blk_status status)
 {
 	struct mmc_queue *mq;
 	struct mmc_queue_req *mq_rq;
@@ -1652,6 +1695,8 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 
 	switch (status) {
 	case MMC_BLK_SUCCESS:
+		pr_err("%s: MMC_BLK_SUCCESS on error path\n", __func__);
+		/* This should not happen: anyway fall through */
 	case MMC_BLK_PARTIAL:
 		/*
 		 * A block was successfully transferred.
@@ -1660,18 +1705,6 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 
 		req_pending = blk_end_request(old_req, 0,
 					      brq->data.bytes_xfered);
-		/*
-		 * If the blk_end_request function returns non-zero even
-		 * though all data has been transferred and no errors
-		 * were returned by the host controller, it's a bug.
-		 */
-		if (status == MMC_BLK_SUCCESS && req_pending) {
-			pr_err("%s BUG rq_tot %d d_xfer %d\n",
-			       __func__, blk_rq_bytes(old_req),
-			       brq->data.bytes_xfered);
-			mmc_blk_rw_cmd_abort(card, old_req);
-			return;
-		}
 		break;
 	case MMC_BLK_CMD_ERR:
 		req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index 0326fa5d8217..eae47ae7c903 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -5,7 +5,9 @@ struct mmc_async_req;
 enum mmc_blk_status;
 struct mmc_queue_req;
 
-void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status status);
+void mmc_blk_rw_done_success(struct mmc_async_req *areq);
+void mmc_blk_rw_done_error(struct mmc_async_req *areq,
+			   enum mmc_blk_status status);
 void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq);
 
 #endif
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 50a8942b98c2..04666ad91df0 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -634,11 +634,32 @@ void mmc_finalize_areq(struct kthread_work *work)
 		mmc_start_bkops(host->card, true);
 	}
 
-	/* Successfully postprocess the old request at this point */
-	mmc_post_req(host, areq->mrq, 0);
-	mmc_blk_rw_done(areq, status);
+	/*
+	 * Postprocess the old request at this point:
+	 * on success: take a fast path!
+	 * on error: take the slow path, retrying etc
+	 */
+	if (status != MMC_BLK_SUCCESS) {
+		mmc_post_req(host, areq->mrq, 0);
+		/*
+		 * This call can lead to retransmissions using
+		 * mmc_restart_areq() so do not complete until
+		 * after this call!
+		 */
+		mmc_blk_rw_done_error(areq, status);
+		complete(&areq->complete);
+		mmc_queue_req_put(mq_rq);
+		return;
+	}
 
+	/*
+	 * There will not be any retransmissions etc
+	 * at this point, so let the next request get
+	 * access to the hardware.
+	 */
 	complete(&areq->complete);
+	mmc_post_req(host, areq->mrq, 0);
+	mmc_blk_rw_done_success(areq);
 	mmc_queue_req_put(mq_rq);
 }
 EXPORT_SYMBOL(mmc_finalize_areq);
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index cab0f51dbb4d..e7ba5bef2df3 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -307,7 +307,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	if (!mq->queue)
 		return -ENOMEM;
 
-	mq->qdepth = 2;
+	mq->qdepth = 4;
 	spin_lock_init(&mq->mqrq_lock);
 	mq->mqrq = kcalloc(mq->qdepth, sizeof(struct mmc_queue_req),
 			   GFP_KERNEL);
-- 
2.9.3

Thread overview: 51+ messages
2017-02-09 15:33 [PATCH 00/16] multiqueue for MMC/SD third try Linus Walleij
2017-02-09 15:33 ` [PATCH 01/16] mmc: core: move some code in mmc_start_areq() Linus Walleij
     [not found]   ` <CGME20170228145506epcas1p1dd72cc5738c3f36df97ac06603ad2731@epcas1p1.samsung.com>
2017-02-28 14:55     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 02/16] mmc: core: refactor asynchronous request finalization Linus Walleij
     [not found]   ` <CGME20170228145552epcas5p4a43c23971d58b30ad6ab9d2c612abe9a@epcas5p4.samsung.com>
2017-02-28 14:55     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 03/16] mmc: core: refactor mmc_request_done() Linus Walleij
     [not found]   ` <CGME20170228145627epcas1p18fb6390b7ae14a6961fac9703712e0a0@epcas1p1.samsung.com>
2017-02-28 14:56     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 04/16] mmc: core: move the asynchronous post-processing Linus Walleij
2017-02-09 15:33 ` [PATCH 05/16] mmc: core: add a kthread for completing requests Linus Walleij
     [not found]   ` <CGME20170228145719epcas5p33d013fd48483bfba477b3f607dcdccb4@epcas5p3.samsung.com>
2017-02-28 14:57     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 06/16] mmc: core: replace waitqueue with worker Linus Walleij
2017-02-22 13:29   ` Adrian Hunter
2017-03-09 22:49     ` Linus Walleij
2017-03-10 14:21       ` Adrian Hunter
2017-03-10 22:05         ` Jens Axboe
2017-03-13  9:25           ` Adrian Hunter
2017-03-13 14:19             ` Jens Axboe
2017-03-14 12:59               ` Adrian Hunter
2017-03-14 14:36                 ` Jens Axboe
2017-03-14 14:43                   ` Christoph Hellwig
2017-03-14 14:52                     ` Jens Axboe
2017-03-28  7:47                   ` Linus Walleij
2017-03-28  7:46         ` Linus Walleij
     [not found]   ` <CGME20170228161023epcas5p3916c2e171d57b8c7814be7841fbab3aa@epcas5p3.samsung.com>
2017-02-28 16:10     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 07/16] mmc: core: do away with is_done_rcv Linus Walleij
     [not found]   ` <CGME20170228161047epcas1p2f307733cb1c441d0c290e794a04a06a8@epcas1p2.samsung.com>
2017-02-28 16:10     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 08/16] mmc: core: do away with is_new_req Linus Walleij
     [not found]   ` <CGME20170228161102epcas5p25dc3b560013599fda6cc750f6d528595@epcas5p2.samsung.com>
2017-02-28 16:11     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 09/16] mmc: core: kill off the context info Linus Walleij
     [not found]   ` <CGME20170228161117epcas5p20a6e62146733466b98c0ef4ea6efbb5f@epcas5p2.samsung.com>
2017-02-28 16:11     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 10/16] mmc: queue: simplify queue logic Linus Walleij
     [not found]   ` <CGME20170228161132epcas5p265793e8675aa2f1e5dd199a9ee0ab6f1@epcas5p2.samsung.com>
2017-02-28 16:11     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 11/16] mmc: block: shuffle retry and error handling Linus Walleij
     [not found]   ` <CGME20170228174522epcas5p34dce6477eb96f7e0fb38431c4de35f60@epcas5p3.samsung.com>
2017-02-28 17:45     ` Bartlomiej Zolnierkiewicz
     [not found]       ` <CGME20170301114559epcas5p1a0c32fbc3a5573a6f1c6291792ea1b2e@epcas5p1.samsung.com>
2017-03-01 11:45         ` Bartlomiej Zolnierkiewicz
     [not found]           ` <CGME20170301155243epcas1p1140ce11db60b31065a0356525a2ee0a0@epcas1p1.samsung.com>
2017-03-01 15:52             ` Bartlomiej Zolnierkiewicz
     [not found]               ` <CGME20170301155822epcas5p103373c6afbd516e4792ebef9bb202b94@epcas5p1.samsung.com>
2017-03-01 15:58                 ` Bartlomiej Zolnierkiewicz
     [not found]               ` <CGME20170301174856epcas5p16bdf861a0117a33f9dad37a81449a95e@epcas5p1.samsung.com>
2017-03-01 17:48                 ` Bartlomiej Zolnierkiewicz
2017-02-09 15:33 ` [PATCH 12/16] mmc: queue: stop flushing the pipeline with NULL Linus Walleij
     [not found]   ` <CGME20170228180309epcas5p317af83f41d3b0426868dcfd660bd0aec@epcas5p3.samsung.com>
2017-02-28 18:03     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:34 ` [PATCH 13/16] mmc: queue: issue struct mmc_queue_req items Linus Walleij
     [not found]   ` <CGME20170228181009epcas1p4ca0e714214097d07d7172182ba8e032b@epcas1p4.samsung.com>
2017-02-28 18:10     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:34 ` [PATCH 14/16] mmc: queue: get/put struct mmc_queue_req Linus Walleij
     [not found]   ` <CGME20170228182149epcas1p28789bce5433cee1579e8b8d083ba5811@epcas1p2.samsung.com>
2017-02-28 18:21     ` Bartlomiej Zolnierkiewicz
2017-02-09 15:34 ` Linus Walleij [this message]
     [not found]   ` <CGME20170301120247epcas1p1ad2be24dc9bbd1218dab8f565fb82b27@epcas1p1.samsung.com>
2017-03-01 12:02     ` [PATCH 15/16] mmc: queue: issue requests in massive parallel Bartlomiej Zolnierkiewicz
2017-02-09 15:34 ` [PATCH 16/16] RFC: mmc: switch MMC/SD to use blk-mq multiqueueing v3 Linus Walleij
2017-02-09 15:39 ` [PATCH 00/16] multiqueue for MMC/SD third try Christoph Hellwig
2017-02-11 13:03 ` Avri Altman
2017-02-12 16:16   ` Linus Walleij
