* [PATCH 00/12 v5] Multiqueue for MMC/SD
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

This is the fifth iteration of this patch set.

I *HOPE* that we can scrap this patch set and merge Adrian's
patches instead, because they also bring CQE support, which is
nice. I had some review comments on his series, mainly that
it needs to kill off the legacy block layer code path that
no one likes anyway.

So this is mainly an academic and inspirational exercise.
Whatever remains of this refactoring, if anything, I can
certainly do on top of Adrian's patches as well.

What changed since v4 is the error path: Adrian pointed out
that the error handling seemed fragile, and it was indeed
fragile... To make sure things work properly I have run long
test rounds with fault injection, essentially:

Enable CONFIG_FAULT_INJECTION, CONFIG_FAULT_INJECTION_DEBUG_FS
       and CONFIG_FAIL_MMC_REQUEST, then:
cd /debug/mmc3/fail_mmc_request/
echo 1 > probability
echo -1 > times

Then I ran dd to the card. I also increased the error rate
to 10% and still completed the tests successfully, but at this
error rate the MMC stack sometimes exceeds the retry limit and
the dd command fails (as is appropriate).
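
A typical run looked something like this (device node and
transfer size are illustrative, adjust to your setup):

dd if=/dev/zero of=/dev/mmcblk3 bs=1M count=512 oflag=direct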

Removing a card during I/O does not work well, however :/
So I guess I would need to work on that if this series were
to continue. (Hopefully unlikely.)


Linus Walleij (12):
  mmc: core: move the asynchronous post-processing
  mmc: core: add a workqueue for completing requests
  mmc: core: replace waitqueue with worker
  mmc: core: do away with is_done_rcv
  mmc: core: do away with is_new_req
  mmc: core: kill off the context info
  mmc: queue: simplify queue logic
  mmc: block: shuffle retry and error handling
  mmc: queue: stop flushing the pipeline with NULL
  mmc: queue/block: pass around struct mmc_queue_req*s
  mmc: block: issue requests in massive parallel
  mmc: switch MMC/SD to use blk-mq multiqueueing v5

 drivers/mmc/core/block.c    | 557 +++++++++++++++++++++++---------------------
 drivers/mmc/core/block.h    |   5 +-
 drivers/mmc/core/bus.c      |   1 -
 drivers/mmc/core/core.c     | 217 ++++++++++-------
 drivers/mmc/core/core.h     |  11 +-
 drivers/mmc/core/host.c     |   1 -
 drivers/mmc/core/mmc_test.c |  31 +--
 drivers/mmc/core/queue.c    | 252 ++++++++------------
 drivers/mmc/core/queue.h    |  16 +-
 include/linux/mmc/core.h    |   3 +-
 include/linux/mmc/host.h    |  31 +--
 11 files changed, 557 insertions(+), 568 deletions(-)

-- 
2.13.6


* [PATCH 01/12 v5] mmc: core: move the asynchronous post-processing
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

This moves the asynchronous post-processing of a request over
to the finalization function.

The patch has a slight semantic change:

Both call sites are in the code path for if (host->areq) and
in the same sequence, but before this patch, the next request
was started before post-processing the previous one.

The effect is that whereas the post-processing used to happen
after the next request had been started, it now happens after
the previous request is done but before the next one has been
started, which cuts half of the pre/post-processing overlap
optimization out.
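
Condensed to the essentials (a sketch, not the literal code),
the ordering in mmc_start_areq() changes like this:

  /* before: post-process the old request after starting the next */
  status = mmc_finalize_areq(host);
  __mmc_start_data_req(host, areq->mrq);
  mmc_post_req(host, host->areq->mrq, 0);

  /* after: finalization itself ends with the post-processing */
  status = mmc_finalize_areq(host); /* now calls mmc_post_req() */
  __mmc_start_data_req(host, areq->mrq);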

In the later patch named "mmc: core: replace waitqueue with
worker" we move the finalization to a worker started by
mmc_request_done(), and in the patch named
"mmc: block: issue requests in massive parallel" we introduce
a forked success/failure path that can quickly complete
requests when they come back from the hardware.

These two later patches together restore the same optimization,
but in a more elegant manner that avoids the need to flush the
two-stage pipeline with NULL, something we remove between these
two patches in the commit named
"mmc: queue: stop flushing the pipeline with NULL".

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/core.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 1f0f44f4dd5f..e2366a82eebe 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -746,6 +746,9 @@ static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
 		mmc_start_bkops(host->card, true);
 	}
 
+	/* Successfully postprocess the old request at this point */
+	mmc_post_req(host, host->areq->mrq, 0);
+
 	return status;
 }
 
@@ -790,10 +793,6 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 	if (status == MMC_BLK_SUCCESS && areq)
 		start_err = __mmc_start_data_req(host, areq->mrq);
 
-	/* Postprocess the old request at this point */
-	if (host->areq)
-		mmc_post_req(host, host->areq->mrq, 0);
-
 	/* Cancel a prepared request if it was not started. */
 	if ((status != MMC_BLK_SUCCESS || start_err) && areq)
 		mmc_post_req(host, areq->mrq, -EINVAL);
-- 
2.13.6


* [PATCH 02/12 v5] mmc: core: add a workqueue for completing requests
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

As we want to complete requests autonomously from feeding the
host with new requests, we create a workqueue dedicated to
finalizing requests in response to the callback from the host
driver. This is necessary to exploit parallelism properly.

This patch just adds the workqueue; later patches will make
use of it.
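
The consumer of this workqueue arrives in the next patch;
roughly, the request-done path will end up kicking it like this
(a sketch of the follow-up patches, not code added here):

  /* from the mmc_request_done() path, once requests carry an areq */
  queue_work(host->req_done_wq, &areq->finalization_work);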

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/core.c  | 9 +++++++++
 drivers/mmc/core/host.c  | 1 -
 include/linux/mmc/host.h | 4 ++++
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index e2366a82eebe..73ebee12e67b 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -2838,6 +2838,14 @@ void mmc_start_host(struct mmc_host *host)
 	host->f_init = max(freqs[0], host->f_min);
 	host->rescan_disable = 0;
 	host->ios.power_mode = MMC_POWER_UNDEFINED;
+	/* Workqueue for completing requests */
+	host->req_done_wq = alloc_workqueue("mmc%d-reqdone",
+				WQ_FREEZABLE | WQ_HIGHPRI | WQ_MEM_RECLAIM,
+				0, host->index);
+	if (!host->req_done_wq) {
+		dev_err(mmc_dev(host), "could not allocate workqueue\n");
+		return;
+	}
 
 	if (!(host->caps2 & MMC_CAP2_NO_PRESCAN_POWERUP)) {
 		mmc_claim_host(host);
@@ -2859,6 +2867,7 @@ void mmc_stop_host(struct mmc_host *host)
 
 	host->rescan_disable = 1;
 	cancel_delayed_work_sync(&host->detect);
+	destroy_workqueue(host->req_done_wq);
 
 	/* clear pm flags now and let card drivers set them as needed */
 	host->pm_flags = 0;
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 35a9e4fd1a9f..88033294832f 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -390,7 +390,6 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
 	INIT_DELAYED_WORK(&host->detect, mmc_rescan);
 	INIT_DELAYED_WORK(&host->sdio_irq_work, sdio_irq_work);
 	setup_timer(&host->retune_timer, mmc_retune_timer, (unsigned long)host);
-
 	/*
 	 * By default, hosts do not support SGIO or large requests.
 	 * They have to set these according to their abilities.
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index e7743eca1021..e4fa7058c288 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -13,6 +13,7 @@
 #include <linux/sched.h>
 #include <linux/device.h>
 #include <linux/fault-inject.h>
+#include <linux/workqueue.h>
 
 #include <linux/mmc/core.h>
 #include <linux/mmc/card.h>
@@ -425,6 +426,9 @@ struct mmc_host {
 	struct mmc_async_req	*areq;		/* active async req */
 	struct mmc_context_info	context_info;	/* async synchronization info */
 
+	/* finalization workqueue, handles finalizing requests */
+	struct workqueue_struct	*req_done_wq;
+
 	/* Ongoing data transfer that allows commands during transfer */
 	struct mmc_request	*ongoing_mrq;
 
-- 
2.13.6


* [PATCH 03/12 v5] mmc: core: replace waitqueue with worker
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

The waitqueue in the host context is there to signal back from
mmc_request_done(), through mmc_wait_data_done(), that the
hardware is done with a command. When the wait is over, the
core will typically submit the next asynchronous request that
is pending, waiting only for the hardware to become available.

This is in the way of letting mmc_request_done() trigger the
report up to the block layer that a block request is finished.

Re-jig this as a first step, removing the waitqueue and
introducing a work item that will run after a completed
asynchronous request, finalizing that request, including
retransmissions, and eventually reporting back with a
completion and a status code to the asynchronous issue method.

This has the upside that we can remove the MMC_BLK_NEW_REQUEST
status code and the "new_request" state in the request queue
that is only there to make the state machine spin out
the first time we send a request.

Use the workqueue we introduced in the host for handling just
this, and then add a work item and a completion in the
asynchronous request to deal with this mechanism.

We introduce a pointer from the mmc_request back to the
asynchronous request so these can be referenced from each
other, and augment mmc_wait_data_done() to use this pointer to
get at the areq and kick the worker, since that function is
only used by asynchronous requests anyway.

This is a central change that lets us do many other things,
since we have now broken the submit and complete code paths in
two, and we can potentially remove the NULL flushing of the
asynchronous pipeline and report block requests as finished
directly from the worker.
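
In outline, the handover between issuer and worker now works
like this (condensed from the hunks below):

  /* issuer: mmc_start_areq() waits out the previous request... */
  wait_for_completion(&previous->complete);
  status = previous->finalization_status;

  /* ...and wires up the next one before starting it */
  init_completion(&areq->complete);
  areq->mrq->areq = areq;
  __mmc_start_data_req(host, areq->mrq);

  /* worker: mmc_finalize_areq() signals when the hardware is done */
  areq->finalization_status = status;
  complete(&areq->complete);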

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/block.c |  3 ++
 drivers/mmc/core/core.c  | 93 ++++++++++++++++++++++++------------------------
 drivers/mmc/core/core.h  |  2 ++
 drivers/mmc/core/queue.c |  1 -
 include/linux/mmc/core.h |  3 +-
 include/linux/mmc/host.h |  7 ++--
 6 files changed, 59 insertions(+), 50 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index ea80ff4cd7f9..5c84175e49be 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1712,6 +1712,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 	mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
 
 	brq->mrq.cmd = &brq->cmd;
+	brq->mrq.areq = NULL;
 
 	brq->cmd.arg = blk_rq_pos(req);
 	if (!mmc_card_blockaddr(card))
@@ -1764,6 +1765,8 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 	}
 
 	mqrq->areq.err_check = mmc_blk_err_check;
+	mqrq->areq.host = card->host;
+	INIT_WORK(&mqrq->areq.finalization_work, mmc_finalize_areq);
 }
 
 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 73ebee12e67b..7440daa2f559 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -369,10 +369,15 @@ EXPORT_SYMBOL(mmc_start_request);
  */
 static void mmc_wait_data_done(struct mmc_request *mrq)
 {
-	struct mmc_context_info *context_info = &mrq->host->context_info;
+	struct mmc_host *host = mrq->host;
+	struct mmc_context_info *context_info = &host->context_info;
+	struct mmc_async_req *areq = mrq->areq;
 
 	context_info->is_done_rcv = true;
-	wake_up_interruptible(&context_info->wait);
+	/* Schedule a work to deal with finalizing this request */
+	if (!areq)
+		pr_err("areq of the data mmc_request was NULL!\n");
+	queue_work(host->req_done_wq, &areq->finalization_work);
 }
 
 static void mmc_wait_done(struct mmc_request *mrq)
@@ -695,43 +700,34 @@ static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
  * Returns the status of the ongoing asynchronous request, but
  * MMC_BLK_SUCCESS if no request was going on.
  */
-static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
+void mmc_finalize_areq(struct work_struct *work)
 {
+	struct mmc_async_req *areq =
+		container_of(work, struct mmc_async_req, finalization_work);
+	struct mmc_host *host = areq->host;
 	struct mmc_context_info *context_info = &host->context_info;
-	enum mmc_blk_status status;
-
-	if (!host->areq)
-		return MMC_BLK_SUCCESS;
-
-	while (1) {
-		wait_event_interruptible(context_info->wait,
-				(context_info->is_done_rcv ||
-				 context_info->is_new_req));
+	enum mmc_blk_status status = MMC_BLK_SUCCESS;
 
-		if (context_info->is_done_rcv) {
-			struct mmc_command *cmd;
+	if (context_info->is_done_rcv) {
+		struct mmc_command *cmd;
 
-			context_info->is_done_rcv = false;
-			cmd = host->areq->mrq->cmd;
+		context_info->is_done_rcv = false;
+		cmd = areq->mrq->cmd;
 
-			if (!cmd->error || !cmd->retries ||
-			    mmc_card_removed(host->card)) {
-				status = host->areq->err_check(host->card,
-							       host->areq);
-				break; /* return status */
-			} else {
-				mmc_retune_recheck(host);
-				pr_info("%s: req failed (CMD%u): %d, retrying...\n",
-					mmc_hostname(host),
-					cmd->opcode, cmd->error);
-				cmd->retries--;
-				cmd->error = 0;
-				__mmc_start_request(host, host->areq->mrq);
-				continue; /* wait for done/new event again */
-			}
+		if (!cmd->error || !cmd->retries ||
+		    mmc_card_removed(host->card)) {
+			status = areq->err_check(host->card,
+						 areq);
+		} else {
+			mmc_retune_recheck(host);
+			pr_info("%s: req failed (CMD%u): %d, retrying...\n",
+				mmc_hostname(host),
+				cmd->opcode, cmd->error);
+			cmd->retries--;
+			cmd->error = 0;
+			__mmc_start_request(host, areq->mrq);
+			return; /* wait for done/new event again */
 		}
-
-		return MMC_BLK_NEW_REQUEST;
 	}
 
 	mmc_retune_release(host);
@@ -740,17 +736,19 @@ static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
 	 * Check BKOPS urgency for each R1 response
 	 */
 	if (host->card && mmc_card_mmc(host->card) &&
-	    ((mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1) ||
-	     (mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1B)) &&
-	    (host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) {
+	    ((mmc_resp_type(areq->mrq->cmd) == MMC_RSP_R1) ||
+	     (mmc_resp_type(areq->mrq->cmd) == MMC_RSP_R1B)) &&
+	    (areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) {
 		mmc_start_bkops(host->card, true);
 	}
 
 	/* Successfully postprocess the old request at this point */
-	mmc_post_req(host, host->areq->mrq, 0);
+	mmc_post_req(host, areq->mrq, 0);
 
-	return status;
+	areq->finalization_status = status;
+	complete(&areq->complete);
 }
+EXPORT_SYMBOL(mmc_finalize_areq);
 
 /**
  *	mmc_start_areq - start an asynchronous request
@@ -780,18 +778,22 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 	if (areq)
 		mmc_pre_req(host, areq->mrq);
 
-	/* Finalize previous request */
-	status = mmc_finalize_areq(host);
+	/* Finalize previous request, if there is one */
+	if (previous) {
+		wait_for_completion(&previous->complete);
+		status = previous->finalization_status;
+	} else {
+		status = MMC_BLK_SUCCESS;
+	}
 	if (ret_stat)
 		*ret_stat = status;
 
-	/* The previous request is still going on... */
-	if (status == MMC_BLK_NEW_REQUEST)
-		return NULL;
-
 	/* Fine so far, start the new request! */
-	if (status == MMC_BLK_SUCCESS && areq)
+	if (status == MMC_BLK_SUCCESS && areq) {
+		init_completion(&areq->complete);
+		areq->mrq->areq = areq;
 		start_err = __mmc_start_data_req(host, areq->mrq);
+	}
 
 	/* Cancel a prepared request if it was not started. */
 	if ((status != MMC_BLK_SUCCESS || start_err) && areq)
@@ -3015,7 +3017,6 @@ void mmc_init_context_info(struct mmc_host *host)
 	host->context_info.is_new_req = false;
 	host->context_info.is_done_rcv = false;
 	host->context_info.is_waiting_last_req = false;
-	init_waitqueue_head(&host->context_info.wait);
 }
 
 static int __init mmc_init(void)
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index 71e6c6d7ceb7..e493d9d73fe2 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -13,6 +13,7 @@
 
 #include <linux/delay.h>
 #include <linux/sched.h>
+#include <linux/workqueue.h>
 
 struct mmc_host;
 struct mmc_card;
@@ -112,6 +113,7 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq);
 
 struct mmc_async_req;
 
+void mmc_finalize_areq(struct work_struct *work);
 struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 				     struct mmc_async_req *areq,
 				     enum mmc_blk_status *ret_stat);
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 4f33d277b125..c46be4402803 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -111,7 +111,6 @@ static void mmc_request_fn(struct request_queue *q)
 
 	if (cntx->is_waiting_last_req) {
 		cntx->is_new_req = true;
-		wake_up_interruptible(&cntx->wait);
 	}
 
 	if (mq->asleep)
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index 927519385482..d755ef8ea880 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -13,6 +13,7 @@
 
 struct mmc_data;
 struct mmc_request;
+struct mmc_async_req;
 
 enum mmc_blk_status {
 	MMC_BLK_SUCCESS = 0,
@@ -23,7 +24,6 @@ enum mmc_blk_status {
 	MMC_BLK_DATA_ERR,
 	MMC_BLK_ECC_ERR,
 	MMC_BLK_NOMEDIUM,
-	MMC_BLK_NEW_REQUEST,
 };
 
 struct mmc_command {
@@ -155,6 +155,7 @@ struct mmc_request {
 
 	struct completion	completion;
 	struct completion	cmd_completion;
+	struct mmc_async_req	*areq; /* pointer to areq if any */
 	void			(*done)(struct mmc_request *);/* completion function */
 	/*
 	 * Notify uppers layers (e.g. mmc block driver) that recovery is needed
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index e4fa7058c288..d2ff79a16839 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -14,6 +14,7 @@
 #include <linux/device.h>
 #include <linux/fault-inject.h>
 #include <linux/workqueue.h>
+#include <linux/completion.h>
 
 #include <linux/mmc/core.h>
 #include <linux/mmc/card.h>
@@ -215,6 +216,10 @@ struct mmc_async_req {
 	 * Returns 0 if success otherwise non zero.
 	 */
 	enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *);
+	struct work_struct finalization_work;
+	enum mmc_blk_status finalization_status;
+	struct completion complete;
+	struct mmc_host *host;
 };
 
 /**
@@ -239,13 +244,11 @@ struct mmc_slot {
  * @is_done_rcv		wake up reason was done request
  * @is_new_req		wake up reason was new request
  * @is_waiting_last_req	mmc context waiting for single running request
- * @wait		wait queue
  */
 struct mmc_context_info {
 	bool			is_done_rcv;
 	bool			is_new_req;
 	bool			is_waiting_last_req;
-	wait_queue_head_t	wait;
 };
 
 struct regulator;
-- 
2.13.6


* [PATCH 04/12] mmc: core: do away with is_done_rcv
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

The "is_done_rcv" in the context info for the host is no longer
needed: it is clear from context (ha!) that as long as we are
waiting for the asynchronous request to come to completion,
we are not done receiving data, and when the finalization work
has run and completed the completion, we are indeed done.
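
With the flag gone, "are we done?" reduces to asking the
completion, as in the mmc_is_req_done() hunk below:

  if (host->areq)
          return completion_done(&host->areq->complete);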

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/core.c  | 40 ++++++++++++++++------------------------
 include/linux/mmc/host.h |  2 --
 2 files changed, 16 insertions(+), 26 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 7440daa2f559..15a664d3c199 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -370,10 +370,8 @@ EXPORT_SYMBOL(mmc_start_request);
 static void mmc_wait_data_done(struct mmc_request *mrq)
 {
 	struct mmc_host *host = mrq->host;
-	struct mmc_context_info *context_info = &host->context_info;
 	struct mmc_async_req *areq = mrq->areq;
 
-	context_info->is_done_rcv = true;
 	/* Schedule a work to deal with finalizing this request */
 	if (!areq)
 		pr_err("areq of the data mmc_request was NULL!\n");
@@ -656,7 +654,7 @@ EXPORT_SYMBOL(mmc_cqe_recovery);
 bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq)
 {
 	if (host->areq)
-		return host->context_info.is_done_rcv;
+		return completion_done(&host->areq->complete);
 	else
 		return completion_done(&mrq->completion);
 }
@@ -705,29 +703,24 @@ void mmc_finalize_areq(struct work_struct *work)
 	struct mmc_async_req *areq =
 		container_of(work, struct mmc_async_req, finalization_work);
 	struct mmc_host *host = areq->host;
-	struct mmc_context_info *context_info = &host->context_info;
 	enum mmc_blk_status status = MMC_BLK_SUCCESS;
+	struct mmc_command *cmd;
 
-	if (context_info->is_done_rcv) {
-		struct mmc_command *cmd;
-
-		context_info->is_done_rcv = false;
-		cmd = areq->mrq->cmd;
+	cmd = areq->mrq->cmd;
 
-		if (!cmd->error || !cmd->retries ||
-		    mmc_card_removed(host->card)) {
-			status = areq->err_check(host->card,
-						 areq);
-		} else {
-			mmc_retune_recheck(host);
-			pr_info("%s: req failed (CMD%u): %d, retrying...\n",
-				mmc_hostname(host),
-				cmd->opcode, cmd->error);
-			cmd->retries--;
-			cmd->error = 0;
-			__mmc_start_request(host, areq->mrq);
-			return; /* wait for done/new event again */
-		}
+	if (!cmd->error || !cmd->retries ||
+	    mmc_card_removed(host->card)) {
+		status = areq->err_check(host->card,
+					 areq);
+	} else {
+		mmc_retune_recheck(host);
+		pr_info("%s: req failed (CMD%u): %d, retrying...\n",
+			mmc_hostname(host),
+			cmd->opcode, cmd->error);
+		cmd->retries--;
+		cmd->error = 0;
+		__mmc_start_request(host, areq->mrq);
+		return; /* wait for done/new event again */
 	}
 
 	mmc_retune_release(host);
@@ -3015,7 +3008,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
 void mmc_init_context_info(struct mmc_host *host)
 {
 	host->context_info.is_new_req = false;
-	host->context_info.is_done_rcv = false;
 	host->context_info.is_waiting_last_req = false;
 }
 
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index d2ff79a16839..d43d26562fae 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -241,12 +241,10 @@ struct mmc_slot {
 
 /**
  * mmc_context_info - synchronization details for mmc context
- * @is_done_rcv		wake up reason was done request
  * @is_new_req		wake up reason was new request
  * @is_waiting_last_req	mmc context waiting for single running request
  */
 struct mmc_context_info {
-	bool			is_done_rcv;
 	bool			is_new_req;
 	bool			is_waiting_last_req;
 };
-- 
2.13.6


* [PATCH 05/12] mmc: core: do away with is_new_req
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

The host context member "is_new_req" is only assigned values,
never checked. Delete it.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/core.c  | 1 -
 drivers/mmc/core/queue.c | 5 -----
 include/linux/mmc/host.h | 2 --
 3 files changed, 8 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 15a664d3c199..b1a5059f6cd1 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -3007,7 +3007,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
  */
 void mmc_init_context_info(struct mmc_host *host)
 {
-	host->context_info.is_new_req = false;
 	host->context_info.is_waiting_last_req = false;
 }
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index c46be4402803..4a0752ef6154 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -55,7 +55,6 @@ static int mmc_queue_thread(void *d)
 		req = blk_fetch_request(q);
 		mq->asleep = false;
 		cntx->is_waiting_last_req = false;
-		cntx->is_new_req = false;
 		if (!req) {
 			/*
 			 * Dispatch queue is empty so set flags for
@@ -109,10 +108,6 @@ static void mmc_request_fn(struct request_queue *q)
 
 	cntx = &mq->card->host->context_info;
 
-	if (cntx->is_waiting_last_req) {
-		cntx->is_new_req = true;
-	}
-
 	if (mq->asleep)
 		wake_up_process(mq->thread);
 }
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index d43d26562fae..36af19990683 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -241,11 +241,9 @@ struct mmc_slot {
 
 /**
  * mmc_context_info - synchronization details for mmc context
- * @is_new_req		wake up reason was new request
  * @is_waiting_last_req	mmc context waiting for single running request
  */
 struct mmc_context_info {
-	bool			is_new_req;
 	bool			is_waiting_last_req;
 };
 
-- 
2.13.6


* [PATCH 06/12 v5] mmc: core: kill off the context info
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

The last member of the context info, is_waiting_last_req, is
only ever assigned values, never checked. Delete it, and with
it the whole context info.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/block.c |  2 --
 drivers/mmc/core/bus.c   |  1 -
 drivers/mmc/core/core.c  | 13 -------------
 drivers/mmc/core/core.h  |  2 --
 drivers/mmc/core/queue.c |  9 +--------
 include/linux/mmc/host.h |  9 ---------
 6 files changed, 1 insertion(+), 35 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 5c84175e49be..86ec87c17e71 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2065,13 +2065,11 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		default:
 			/* Normal request, just issue it */
 			mmc_blk_issue_rw_rq(mq, req);
-			card->host->context_info.is_waiting_last_req = false;
 			break;
 		}
 	} else {
 		/* No request, flushing the pipeline with NULL */
 		mmc_blk_issue_rw_rq(mq, NULL);
-		card->host->context_info.is_waiting_last_req = false;
 	}
 
 out:
diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
index a4b49e25fe96..45904a7e87be 100644
--- a/drivers/mmc/core/bus.c
+++ b/drivers/mmc/core/bus.c
@@ -348,7 +348,6 @@ int mmc_add_card(struct mmc_card *card)
 #ifdef CONFIG_DEBUG_FS
 	mmc_add_card_debugfs(card);
 #endif
-	mmc_init_context_info(card->host);
 
 	card->dev.of_node = mmc_of_find_child_device(card->host, 0);
 
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index b1a5059f6cd1..fa86f9a15d29 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -2997,19 +2997,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
 }
 #endif
 
-/**
- * mmc_init_context_info() - init synchronization context
- * @host: mmc host
- *
- * Init struct context_info needed to implement asynchronous
- * request mechanism, used by mmc core, host driver and mmc requests
- * supplier.
- */
-void mmc_init_context_info(struct mmc_host *host)
-{
-	host->context_info.is_waiting_last_req = false;
-}
-
 static int __init mmc_init(void)
 {
 	int ret;
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index e493d9d73fe2..88b852ac8f74 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -92,8 +92,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host);
 void mmc_add_card_debugfs(struct mmc_card *card);
 void mmc_remove_card_debugfs(struct mmc_card *card);
 
-void mmc_init_context_info(struct mmc_host *host);
-
 int mmc_execute_tuning(struct mmc_card *card);
 int mmc_hs200_to_hs400(struct mmc_card *card);
 int mmc_hs400_to_hs200(struct mmc_card *card);
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 4a0752ef6154..2c232ba4e594 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -42,7 +42,6 @@ static int mmc_queue_thread(void *d)
 {
 	struct mmc_queue *mq = d;
 	struct request_queue *q = mq->queue;
-	struct mmc_context_info *cntx = &mq->card->host->context_info;
 
 	current->flags |= PF_MEMALLOC;
 
@@ -54,15 +53,12 @@ static int mmc_queue_thread(void *d)
 		set_current_state(TASK_INTERRUPTIBLE);
 		req = blk_fetch_request(q);
 		mq->asleep = false;
-		cntx->is_waiting_last_req = false;
 		if (!req) {
 			/*
 			 * Dispatch queue is empty so set flags for
 			 * mmc_request_fn() to wake us up.
 			 */
-			if (mq->qcnt)
-				cntx->is_waiting_last_req = true;
-			else
+			if (!mq->qcnt)
 				mq->asleep = true;
 		}
 		spin_unlock_irq(q->queue_lock);
@@ -96,7 +92,6 @@ static void mmc_request_fn(struct request_queue *q)
 {
 	struct mmc_queue *mq = q->queuedata;
 	struct request *req;
-	struct mmc_context_info *cntx;
 
 	if (!mq) {
 		while ((req = blk_fetch_request(q)) != NULL) {
@@ -106,8 +101,6 @@ static void mmc_request_fn(struct request_queue *q)
 		return;
 	}
 
-	cntx = &mq->card->host->context_info;
-
 	if (mq->asleep)
 		wake_up_process(mq->thread);
 }
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 36af19990683..4b210e9283f6 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -239,14 +239,6 @@ struct mmc_slot {
 	void *handler_priv;
 };
 
-/**
- * mmc_context_info - synchronization details for mmc context
- * @is_waiting_last_req	mmc context waiting for single running request
- */
-struct mmc_context_info {
-	bool			is_waiting_last_req;
-};
-
 struct regulator;
 struct mmc_pwrseq;
 
@@ -423,7 +415,6 @@ struct mmc_host {
 	struct dentry		*debugfs_root;
 
 	struct mmc_async_req	*areq;		/* active async req */
-	struct mmc_context_info	context_info;	/* async synchronization info */
 
 	/* finalization workqueue, handles finalizing requests */
 	struct workqueue_struct	*req_done_wq;
-- 
2.13.6


* [PATCH 07/12 v5] mmc: queue: simplify queue logic
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

The if() statement checking whether there is no current or
previous request is now just looking ahead at something that
will be concluded a few lines below. Simplify the logic by
moving the assignment of .asleep.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/queue.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 2c232ba4e594..023bbddc1a0b 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -53,14 +53,6 @@ static int mmc_queue_thread(void *d)
 		set_current_state(TASK_INTERRUPTIBLE);
 		req = blk_fetch_request(q);
 		mq->asleep = false;
-		if (!req) {
-			/*
-			 * Dispatch queue is empty so set flags for
-			 * mmc_request_fn() to wake us up.
-			 */
-			if (!mq->qcnt)
-				mq->asleep = true;
-		}
 		spin_unlock_irq(q->queue_lock);
 
 		if (req || mq->qcnt) {
@@ -68,6 +60,7 @@ static int mmc_queue_thread(void *d)
 			mmc_blk_issue_rq(mq, req);
 			cond_resched();
 		} else {
+			mq->asleep = true;
 			if (kthread_should_stop()) {
 				set_current_state(TASK_RUNNING);
 				break;
-- 
2.13.6


* [PATCH 08/12 v5] mmc: block: shuffle retry and error handling
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

Instead of doing retries at the same time as trying to submit new
requests, do the retries when the request is reported as completed
by the driver, in the finalization worker.

This is achieved by letting the core worker call back into the
block layer using a callback in the asynchronous request,
->report_done_status(), that passes the status back to the block
core so it can repeatedly try to hammer the request using single
requests, retries etc., by calling back into the core layer using
mmc_restart_areq(), which just kicks off the same asynchronous
request again without waiting for any previous ongoing request.

The beauty of it is that the completion will not complete until
the block layer has had the opportunity to hammer a bit at the
card using a bunch of different approaches: the ones that used
to live in the while() loop in mmc_blk_issue_rw_rq() and now
live in mmc_blk_rw_done().

The algorithm for recapturing, retrying and handling errors is
identical to the one we used to have in mmc_blk_issue_rw_rq(),
only augmented so that it is called in another path from the core.

We have to add and initialize a pointer back to the struct
mmc_queue from the struct mmc_queue_req so that we can find the
queue from the asynchronous request when the status is reported
back from the core.

Other users of the asynchronous request that do not need to
retry or use the misc error handling fallbacks will work fine,
since a NULL ->report_done_status() is perfectly OK. Currently
only the test module falls into this category.
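
Condensed from the hunks below, the new control flow is roughly:

  /* block layer, when preparing a new asynchronous request */
  new_areq->report_done_status = mmc_blk_rw_done;

  /* core finalization worker, when the host reports back */
  if (areq->report_done_status)
          areq->report_done_status(areq, status);
  complete(&areq->complete);

  /* block layer error path, restarting the same areq */
  mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
  mmc_restart_areq(mq->card->host, areq);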

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v4->v5:
- The "disable_multi" and "retry" variables used to be inside
  the do {} loop in the error handler, so now that we restart
  the areq when there are problems, we need to make these
  part of the struct mmc_async_req and reinitialize them to
  false/zero when restarting an asynchronous request.
- Assign mrq->areq also when restarting asynchronous requests:
  the mrq is a quick-turnaround produce and consume object and
  only lives for one request to the host, so this needs to be
  assigned every time we make a new mrq and want to send it
  off to the host.
- Switch "disable_multi" to be a bool as is appropriate.
- Be more careful to assign NULL to host->areq when it is not
  in use, and make sure this only happens at one spot.
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/block.c | 347 ++++++++++++++++++++++++-----------------------
 drivers/mmc/core/core.c  |  46 ++++---
 drivers/mmc/core/core.h  |   1 +
 drivers/mmc/core/queue.c |   2 +
 drivers/mmc/core/queue.h |   1 +
 include/linux/mmc/host.h |   7 +-
 6 files changed, 221 insertions(+), 183 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 86ec87c17e71..2cda2f52058e 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1575,7 +1575,7 @@ static enum mmc_blk_status mmc_blk_err_check(struct mmc_card *card,
 }
 
 static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
-			      int disable_multi, bool *do_rel_wr_p,
+			      bool disable_multi, bool *do_rel_wr_p,
 			      bool *do_data_tag_p)
 {
 	struct mmc_blk_data *md = mq->blkdata;
@@ -1700,7 +1700,7 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 
 static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 			       struct mmc_card *card,
-			       int disable_multi,
+			       bool disable_multi,
 			       struct mmc_queue *mq)
 {
 	u32 readcmd, writecmd;
@@ -1811,198 +1811,213 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
 /**
  * mmc_blk_rw_try_restart() - tries to restart the current async request
  * @mq: the queue with the card and host to restart
- * @req: a new request that want to be started after the current one
+ * @mqrq: the mmc_queue_request containing the areq to be restarted
  */
-static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req,
+static void mmc_blk_rw_try_restart(struct mmc_queue *mq,
 				   struct mmc_queue_req *mqrq)
 {
-	if (!req)
-		return;
+	struct mmc_async_req *areq = &mqrq->areq;
+
+	/* Proceed and try to restart the current async request */
+	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
+	areq->disable_multi = false;
+	areq->retry = 0;
+	mmc_restart_areq(mq->card->host, areq);
+}
+
+static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status status)
+{
+	struct mmc_queue *mq;
+	struct mmc_blk_data *md;
+	struct mmc_card *card;
+	struct mmc_host *host;
+	struct mmc_queue_req *mq_rq;
+	struct mmc_blk_request *brq;
+	struct request *old_req;
+	bool req_pending = true;
+	int type, retune_retry_done = 0;
 
 	/*
-	 * If the card was removed, just cancel everything and return.
+	 * An asynchronous request has been completed and we proceed
+	 * to handle the result of it.
 	 */
-	if (mmc_card_removed(mq->card)) {
-		req->rq_flags |= RQF_QUIET;
-		blk_end_request_all(req, BLK_STS_IOERR);
-		mq->qcnt--; /* FIXME: just set to 0? */
+	mq_rq =	container_of(areq, struct mmc_queue_req, areq);
+	mq = mq_rq->mq;
+	md = mq->blkdata;
+	card = mq->card;
+	host = card->host;
+	brq = &mq_rq->brq;
+	old_req = mmc_queue_req_to_req(mq_rq);
+	type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+
+	switch (status) {
+	case MMC_BLK_SUCCESS:
+	case MMC_BLK_PARTIAL:
+		/*
+		 * A block was successfully transferred.
+		 */
+		mmc_blk_reset_success(md, type);
+		req_pending = blk_end_request(old_req, BLK_STS_OK,
+					      brq->data.bytes_xfered);
+		/*
+		 * If the blk_end_request function returns non-zero even
+		 * though all data has been transferred and no errors
+		 * were returned by the host controller, it's a bug.
+		 */
+		if (status == MMC_BLK_SUCCESS && req_pending) {
+			pr_err("%s BUG rq_tot %d d_xfer %d\n",
+			       __func__, blk_rq_bytes(old_req),
+			       brq->data.bytes_xfered);
+			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+			return;
+		}
+		break;
+	case MMC_BLK_CMD_ERR:
+		req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
+		if (mmc_blk_reset(md, card->host, type)) {
+			if (req_pending)
+				mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+			else
+				mq->qcnt--;
+			mmc_blk_rw_try_restart(mq, mq_rq);
+			return;
+		}
+		if (!req_pending) {
+			mq->qcnt--;
+			mmc_blk_rw_try_restart(mq, mq_rq);
+			return;
+		}
+		break;
+	case MMC_BLK_RETRY:
+		retune_retry_done = brq->retune_retry_done;
+		if (areq->retry++ < 5)
+			break;
+		/* Fall through */
+	case MMC_BLK_ABORT:
+		if (!mmc_blk_reset(md, card->host, type))
+			break;
+		mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+		mmc_blk_rw_try_restart(mq, mq_rq);
+		return;
+	case MMC_BLK_DATA_ERR: {
+		int err;
+		err = mmc_blk_reset(md, card->host, type);
+		if (!err)
+			break;
+		if (err == -ENODEV) {
+			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+			mmc_blk_rw_try_restart(mq, mq_rq);
+			return;
+		}
+		/* Fall through */
+	}
+	case MMC_BLK_ECC_ERR:
+		if (brq->data.blocks > 1) {
+			/* Redo read one sector at a time */
+			pr_warn("%s: retrying using single block read\n",
+				old_req->rq_disk->disk_name);
+			areq->disable_multi = true;
+			break;
+		}
+		/*
+		 * After an error, we redo I/O one sector at a
+		 * time, so we only reach here after trying to
+		 * read a single sector.
+		 */
+		req_pending = blk_end_request(old_req, BLK_STS_IOERR,
+					      brq->data.blksz);
+		if (!req_pending) {
+			mq->qcnt--;
+			mmc_blk_rw_try_restart(mq, mq_rq);
+			return;
+		}
+		break;
+	case MMC_BLK_NOMEDIUM:
+		mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+		mmc_blk_rw_try_restart(mq, mq_rq);
+		return;
+	default:
+		pr_err("%s: Unhandled return value (%d)",
+				old_req->rq_disk->disk_name, status);
+		mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+		mmc_blk_rw_try_restart(mq, mq_rq);
 		return;
 	}
-	/* Else proceed and try to restart the current async request */
-	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
-	mmc_start_areq(mq->card->host, &mqrq->areq, NULL);
+
+	if (req_pending) {
+		/*
+		 * In case of a incomplete request
+		 * prepare it again and resend.
+		 */
+		mmc_blk_rw_rq_prep(mq_rq, card,
+				areq->disable_multi, mq);
+		mmc_start_areq(card->host, areq, NULL);
+		mq_rq->brq.retune_retry_done = retune_retry_done;
+	} else {
+		/* Else, this request is done */
+		mq->qcnt--;
+	}
 }
 
 static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 {
-	struct mmc_blk_data *md = mq->blkdata;
-	struct mmc_card *card = md->queue.card;
-	struct mmc_blk_request *brq;
-	int disable_multi = 0, retry = 0, type, retune_retry_done = 0;
 	enum mmc_blk_status status;
-	struct mmc_queue_req *mqrq_cur = NULL;
-	struct mmc_queue_req *mq_rq;
-	struct request *old_req;
 	struct mmc_async_req *new_areq;
 	struct mmc_async_req *old_areq;
-	bool req_pending = true;
+	struct mmc_card *card = mq->card;
 
-	if (new_req) {
-		mqrq_cur = req_to_mmc_queue_req(new_req);
+	if (new_req)
 		mq->qcnt++;
-	}
 
 	if (!mq->qcnt)
 		return;
 
-	do {
-		if (new_req) {
-			/*
-			 * When 4KB native sector is enabled, only 8 blocks
-			 * multiple read or write is allowed
-			 */
-			if (mmc_large_sector(card) &&
-				!IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
-				pr_err("%s: Transfer size is not 4KB sector size aligned\n",
-					new_req->rq_disk->disk_name);
-				mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur);
-				return;
-			}
-
-			mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
-			new_areq = &mqrq_cur->areq;
-		} else
-			new_areq = NULL;
-
-		old_areq = mmc_start_areq(card->host, new_areq, &status);
-		if (!old_areq) {
-			/*
-			 * We have just put the first request into the pipeline
-			 * and there is nothing more to do until it is
-			 * complete.
-			 */
-			return;
-		}
+	/*
+	 * If the card was removed, just cancel everything and return.
+	 */
+	if (mmc_card_removed(card)) {
+		new_req->rq_flags |= RQF_QUIET;
+		blk_end_request_all(new_req, BLK_STS_IOERR);
+		mq->qcnt--; /* FIXME: just set to 0? */
+		return;
+	}
 
+	if (new_req) {
+		struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req);
 		/*
-		 * An asynchronous request has been completed and we proceed
-		 * to handle the result of it.
+		 * When 4KB native sector is enabled, only 8 blocks
+		 * multiple read or write is allowed
 		 */
-		mq_rq =	container_of(old_areq, struct mmc_queue_req, areq);
-		brq = &mq_rq->brq;
-		old_req = mmc_queue_req_to_req(mq_rq);
-		type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
-
-		switch (status) {
-		case MMC_BLK_SUCCESS:
-		case MMC_BLK_PARTIAL:
-			/*
-			 * A block was successfully transferred.
-			 */
-			mmc_blk_reset_success(md, type);
-
-			req_pending = blk_end_request(old_req, BLK_STS_OK,
-						      brq->data.bytes_xfered);
-			/*
-			 * If the blk_end_request function returns non-zero even
-			 * though all data has been transferred and no errors
-			 * were returned by the host controller, it's a bug.
-			 */
-			if (status == MMC_BLK_SUCCESS && req_pending) {
-				pr_err("%s BUG rq_tot %d d_xfer %d\n",
-				       __func__, blk_rq_bytes(old_req),
-				       brq->data.bytes_xfered);
-				mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-				return;
-			}
-			break;
-		case MMC_BLK_CMD_ERR:
-			req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
-			if (mmc_blk_reset(md, card->host, type)) {
-				if (req_pending)
-					mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-				else
-					mq->qcnt--;
-				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
-				return;
-			}
-			if (!req_pending) {
-				mq->qcnt--;
-				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
-				return;
-			}
-			break;
-		case MMC_BLK_RETRY:
-			retune_retry_done = brq->retune_retry_done;
-			if (retry++ < 5)
-				break;
-			/* Fall through */
-		case MMC_BLK_ABORT:
-			if (!mmc_blk_reset(md, card->host, type))
-				break;
-			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
-			return;
-		case MMC_BLK_DATA_ERR: {
-			int err;
-
-			err = mmc_blk_reset(md, card->host, type);
-			if (!err)
-				break;
-			if (err == -ENODEV) {
-				mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
-				return;
-			}
-			/* Fall through */
-		}
-		case MMC_BLK_ECC_ERR:
-			if (brq->data.blocks > 1) {
-				/* Redo read one sector at a time */
-				pr_warn("%s: retrying using single block read\n",
-					old_req->rq_disk->disk_name);
-				disable_multi = 1;
-				break;
-			}
-			/*
-			 * After an error, we redo I/O one sector at a
-			 * time, so we only reach here after trying to
-			 * read a single sector.
-			 */
-			req_pending = blk_end_request(old_req, BLK_STS_IOERR,
-						      brq->data.blksz);
-			if (!req_pending) {
-				mq->qcnt--;
-				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
-				return;
-			}
-			break;
-		case MMC_BLK_NOMEDIUM:
-			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
-			return;
-		default:
-			pr_err("%s: Unhandled return value (%d)",
-					old_req->rq_disk->disk_name, status);
-			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
+		if (mmc_large_sector(card) &&
+		    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
+			pr_err("%s: Transfer size is not 4KB sector size aligned\n",
+			       new_req->rq_disk->disk_name);
+			mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur);
 			return;
 		}
 
-		if (req_pending) {
-			/*
-			 * In case of a incomplete request
-			 * prepare it again and resend.
-			 */
-			mmc_blk_rw_rq_prep(mq_rq, card,
-					disable_multi, mq);
-			mmc_start_areq(card->host,
-					&mq_rq->areq, NULL);
-			mq_rq->brq.retune_retry_done = retune_retry_done;
-		}
-	} while (req_pending);
+		mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+		new_areq = &mqrq_cur->areq;
+		new_areq->report_done_status = mmc_blk_rw_done;
+		new_areq->disable_multi = false;
+		new_areq->retry = 0;
+	} else
+		new_areq = NULL;
 
-	mq->qcnt--;
+	old_areq = mmc_start_areq(card->host, new_areq, &status);
+	if (!old_areq) {
+		/*
+		 * We have just put the first request into the pipeline
+		 * and there is nothing more to do until it is
+		 * complete.
+		 */
+		return;
+	}
+	/*
+	 * FIXME: yes, we just discard the old_areq, it will be
+	 * post-processed when done, in mmc_blk_rw_done(). We clean
+	 * this up in later patches.
+	 */
 }
 
 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index fa86f9a15d29..f49a2798fb56 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -738,12 +738,29 @@ void mmc_finalize_areq(struct work_struct *work)
 	/* Successfully postprocess the old request at this point */
 	mmc_post_req(host, areq->mrq, 0);
 
-	areq->finalization_status = status;
+	/* Call back with status, this will trigger retry etc if needed */
+	if (areq->report_done_status)
+		areq->report_done_status(areq, status);
+
+	/* This opens the gate for the next request to start on the host */
 	complete(&areq->complete);
 }
 EXPORT_SYMBOL(mmc_finalize_areq);
 
 /**
+ * mmc_restart_areq() - restart an asynchronous request
+ * @host: MMC host to restart the command on
+ * @areq: the asynchronous request to restart
+ */
+int mmc_restart_areq(struct mmc_host *host,
+		     struct mmc_async_req *areq)
+{
+	areq->mrq->areq = areq;
+	return __mmc_start_data_req(host, areq->mrq);
+}
+EXPORT_SYMBOL(mmc_restart_areq);
+
+/**
  *	mmc_start_areq - start an asynchronous request
  *	@host: MMC host to start command
  *	@areq: asynchronous request to start
@@ -763,7 +780,6 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 				     struct mmc_async_req *areq,
 				     enum mmc_blk_status *ret_stat)
 {
-	enum mmc_blk_status status;
 	int start_err = 0;
 	struct mmc_async_req *previous = host->areq;
 
@@ -774,29 +790,27 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 	/* Finalize previous request, if there is one */
 	if (previous) {
 		wait_for_completion(&previous->complete);
-		status = previous->finalization_status;
-	} else {
-		status = MMC_BLK_SUCCESS;
+		host->areq = NULL;
 	}
+
+	/* Just always succeed */
 	if (ret_stat)
-		*ret_stat = status;
+		*ret_stat = MMC_BLK_SUCCESS;
 
 	/* Fine so far, start the new request! */
-	if (status == MMC_BLK_SUCCESS && areq) {
+	if (areq) {
 		init_completion(&areq->complete);
 		areq->mrq->areq = areq;
 		start_err = __mmc_start_data_req(host, areq->mrq);
+		/* Cancel a prepared request if it was not started. */
+		if (start_err) {
+			mmc_post_req(host, areq->mrq, -EINVAL);
+			host->areq = NULL;
+		} else {
+			host->areq = areq;
+		}
 	}
 
-	/* Cancel a prepared request if it was not started. */
-	if ((status != MMC_BLK_SUCCESS || start_err) && areq)
-		mmc_post_req(host, areq->mrq, -EINVAL);
-
-	if (status != MMC_BLK_SUCCESS)
-		host->areq = NULL;
-	else
-		host->areq = areq;
-
 	return previous;
 }
 EXPORT_SYMBOL(mmc_start_areq);
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index 88b852ac8f74..1859804ecd80 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -112,6 +112,7 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq);
 struct mmc_async_req;
 
 void mmc_finalize_areq(struct work_struct *work);
+int mmc_restart_areq(struct mmc_host *host, struct mmc_async_req *areq);
 struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 				     struct mmc_async_req *areq,
 				     enum mmc_blk_status *ret_stat);
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 023bbddc1a0b..db1fa11d9870 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -145,6 +145,7 @@ static int mmc_init_request(struct request_queue *q, struct request *req,
 	mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp);
 	if (!mq_rq->sg)
 		return -ENOMEM;
+	mq_rq->mq = mq;
 
 	return 0;
 }
@@ -155,6 +156,7 @@ static void mmc_exit_request(struct request_queue *q, struct request *req)
 
 	kfree(mq_rq->sg);
 	mq_rq->sg = NULL;
+	mq_rq->mq = NULL;
 }
 
 static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 68f68ecd94ea..dce7cedb9d0b 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -52,6 +52,7 @@ struct mmc_queue_req {
 	struct mmc_blk_request	brq;
 	struct scatterlist	*sg;
 	struct mmc_async_req	areq;
+	struct mmc_queue	*mq;
 	enum mmc_drv_op		drv_op;
 	int			drv_op_result;
 	void			*drv_op_data;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 4b210e9283f6..f1c362e0765c 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -211,13 +211,18 @@ struct mmc_cqe_ops {
 struct mmc_async_req {
 	/* active mmc request */
 	struct mmc_request	*mrq;
+	bool disable_multi;
+	int retry;
 	/*
 	 * Check error status of completed mmc request.
 	 * Returns 0 if success otherwise non zero.
 	 */
 	enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *);
+	/*
+	 * Report finalization status from the core to e.g. the block layer.
+	 */
+	void (*report_done_status)(struct mmc_async_req *, enum mmc_blk_status);
 	struct work_struct finalization_work;
-	enum mmc_blk_status finalization_status;
 	struct completion complete;
 	struct mmc_host *host;
 };
-- 
2.13.6


* [PATCH 09/12 v5] mmc: queue: stop flushing the pipeline with NULL
From: Linus Walleij @ 2017-11-10 10:01 UTC
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

Remove all the pipeline flushing: i.e. repeatedly sending NULL
down to the core layer to flush out asynchronous requests, and
also sending NULL after "special" commands to achieve the same
flush.

Instead: let the "special" commands wait for any ongoing
asynchronous transfers using the completion, and apart from
that expect the core.c and block.c layers to deal with the
ongoing requests autonomously without any "push" from the
queue.

Add a function in the core to wait for an asynchronous request
to complete.
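
For illustration, the issue path now synchronizes with any
in-flight request like this (a minimal sketch of the pattern
introduced below, using the flush case as the example):

    /* Drain any ongoing asynchronous transfer before
     * issuing a "special" command such as a flush.
     */
    mmc_wait_for_areq(card->host);
    mmc_blk_issue_flush(mq, req);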

Update the tests to use the new function prototypes.

This kills off some FIXMEs, such as getting rid of the mq->qcnt
queue depth variable that was introduced a while back.

It is a vital step toward multiqueue enablement that we stop
pulling NULL off the end of the request queue to flush the
asynchronous issuing mechanism.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/block.c    | 173 ++++++++++++++++----------------------------
 drivers/mmc/core/core.c     |  50 +++++++------
 drivers/mmc/core/core.h     |   6 +-
 drivers/mmc/core/mmc_test.c |  31 ++------
 drivers/mmc/core/queue.c    |  11 ++-
 drivers/mmc/core/queue.h    |   7 --
 6 files changed, 108 insertions(+), 170 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 2cda2f52058e..c7a57006e27f 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1805,7 +1805,6 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
 	while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req)));
-	mq->qcnt--;
 }
 
 /**
@@ -1877,13 +1876,10 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		if (mmc_blk_reset(md, card->host, type)) {
 			if (req_pending)
 				mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-			else
-				mq->qcnt--;
 			mmc_blk_rw_try_restart(mq, mq_rq);
 			return;
 		}
 		if (!req_pending) {
-			mq->qcnt--;
 			mmc_blk_rw_try_restart(mq, mq_rq);
 			return;
 		}
@@ -1927,7 +1923,6 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		req_pending = blk_end_request(old_req, BLK_STS_IOERR,
 					      brq->data.blksz);
 		if (!req_pending) {
-			mq->qcnt--;
 			mmc_blk_rw_try_restart(mq, mq_rq);
 			return;
 		}
@@ -1951,26 +1946,16 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		 */
 		mmc_blk_rw_rq_prep(mq_rq, card,
 				areq->disable_multi, mq);
-		mmc_start_areq(card->host, areq, NULL);
+		mmc_start_areq(card->host, areq);
 		mq_rq->brq.retune_retry_done = retune_retry_done;
-	} else {
-		/* Else, this request is done */
-		mq->qcnt--;
 	}
 }
 
 static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 {
-	enum mmc_blk_status status;
-	struct mmc_async_req *new_areq;
-	struct mmc_async_req *old_areq;
 	struct mmc_card *card = mq->card;
-
-	if (new_req)
-		mq->qcnt++;
-
-	if (!mq->qcnt)
-		return;
+	struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req);
+	struct mmc_async_req *areq = &mqrq_cur->areq;
 
 	/*
 	 * If the card was removed, just cancel everything and return.
@@ -1978,46 +1963,26 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	if (mmc_card_removed(card)) {
 		new_req->rq_flags |= RQF_QUIET;
 		blk_end_request_all(new_req, BLK_STS_IOERR);
-		mq->qcnt--; /* FIXME: just set to 0? */
 		return;
 	}
 
-	if (new_req) {
-		struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req);
-		/*
-		 * When 4KB native sector is enabled, only 8 blocks
-		 * multiple read or write is allowed
-		 */
-		if (mmc_large_sector(card) &&
-		    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
-			pr_err("%s: Transfer size is not 4KB sector size aligned\n",
-			       new_req->rq_disk->disk_name);
-			mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur);
-			return;
-		}
-
-		mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
-		new_areq = &mqrq_cur->areq;
-		new_areq->report_done_status = mmc_blk_rw_done;
-		new_areq->disable_multi = false;
-		new_areq->retry = 0;
-	} else
-		new_areq = NULL;
-
-	old_areq = mmc_start_areq(card->host, new_areq, &status);
-	if (!old_areq) {
-		/*
-		 * We have just put the first request into the pipeline
-		 * and there is nothing more to do until it is
-		 * complete.
-		 */
-		return;
-	}
 	/*
-	 * FIXME: yes, we just discard the old_areq, it will be
-	 * post-processed when done, in mmc_blk_rw_done(). We clean
-	 * this up in later patches.
+	 * When 4KB native sector is enabled, only 8 blocks
+	 * multiple read or write is allowed
 	 */
+	if (mmc_large_sector(card) &&
+	    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
+		pr_err("%s: Transfer size is not 4KB sector size aligned\n",
+		       new_req->rq_disk->disk_name);
+		mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur);
+		return;
+	}
+
+	mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+	areq->disable_multi = false;
+	areq->retry = 0;
+	areq->report_done_status = mmc_blk_rw_done;
+	mmc_start_areq(card->host, areq);
 }
 
 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
@@ -2026,70 +1991,56 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 
-	if (req && !mq->qcnt)
-		/* claim host only for the first request */
-		mmc_get_card(card, NULL);
+	if (!req) {
+		pr_err("%s: tried to issue NULL request\n", __func__);
+		return;
+	}
 
 	ret = mmc_blk_part_switch(card, md->part_type);
 	if (ret) {
-		if (req) {
-			blk_end_request_all(req, BLK_STS_IOERR);
-		}
-		goto out;
+		blk_end_request_all(req, BLK_STS_IOERR);
+		return;
 	}
 
-	if (req) {
-		switch (req_op(req)) {
-		case REQ_OP_DRV_IN:
-		case REQ_OP_DRV_OUT:
-			/*
-			 * Complete ongoing async transfer before issuing
-			 * ioctl()s
-			 */
-			if (mq->qcnt)
-				mmc_blk_issue_rw_rq(mq, NULL);
-			mmc_blk_issue_drv_op(mq, req);
-			break;
-		case REQ_OP_DISCARD:
-			/*
-			 * Complete ongoing async transfer before issuing
-			 * discard.
-			 */
-			if (mq->qcnt)
-				mmc_blk_issue_rw_rq(mq, NULL);
-			mmc_blk_issue_discard_rq(mq, req);
-			break;
-		case REQ_OP_SECURE_ERASE:
-			/*
-			 * Complete ongoing async transfer before issuing
-			 * secure erase.
-			 */
-			if (mq->qcnt)
-				mmc_blk_issue_rw_rq(mq, NULL);
-			mmc_blk_issue_secdiscard_rq(mq, req);
-			break;
-		case REQ_OP_FLUSH:
-			/*
-			 * Complete ongoing async transfer before issuing
-			 * flush.
-			 */
-			if (mq->qcnt)
-				mmc_blk_issue_rw_rq(mq, NULL);
-			mmc_blk_issue_flush(mq, req);
-			break;
-		default:
-			/* Normal request, just issue it */
-			mmc_blk_issue_rw_rq(mq, req);
-			break;
-		}
-	} else {
-		/* No request, flushing the pipeline with NULL */
-		mmc_blk_issue_rw_rq(mq, NULL);
+	switch (req_op(req)) {
+	case REQ_OP_DRV_IN:
+	case REQ_OP_DRV_OUT:
+		/*
+		 * Complete ongoing async transfer before issuing
+		 * ioctl()s
+		 */
+		mmc_wait_for_areq(card->host);
+		mmc_blk_issue_drv_op(mq, req);
+		break;
+	case REQ_OP_DISCARD:
+		/*
+		 * Complete ongoing async transfer before issuing
+		 * discard.
+		 */
+		mmc_wait_for_areq(card->host);
+		mmc_blk_issue_discard_rq(mq, req);
+		break;
+	case REQ_OP_SECURE_ERASE:
+		/*
+		 * Complete ongoing async transfer before issuing
+		 * secure erase.
+		 */
+		mmc_wait_for_areq(card->host);
+		mmc_blk_issue_secdiscard_rq(mq, req);
+		break;
+	case REQ_OP_FLUSH:
+		/*
+		 * Complete ongoing async transfer before issuing
+		 * flush.
+		 */
+		mmc_wait_for_areq(card->host);
+		mmc_blk_issue_flush(mq, req);
+		break;
+	default:
+		/* Normal request, just issue it */
+		mmc_blk_issue_rw_rq(mq, req);
+		break;
 	}
-
-out:
-	if (!mq->qcnt)
-		mmc_put_card(card, NULL);
 }
 
 static inline int mmc_blk_readonly(struct mmc_card *card)
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index f49a2798fb56..42795fdfb730 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -747,6 +747,15 @@ void mmc_finalize_areq(struct work_struct *work)
 }
 EXPORT_SYMBOL(mmc_finalize_areq);
 
+void mmc_wait_for_areq(struct mmc_host *host)
+{
+	if (host->areq) {
+		wait_for_completion(&host->areq->complete);
+		host->areq = NULL;
+	}
+}
+EXPORT_SYMBOL(mmc_wait_for_areq);
+
 /**
  * mmc_restart_areq() - restart an asynchronous request
  * @host: MMC host to restart the command on
@@ -776,16 +785,18 @@ EXPORT_SYMBOL(mmc_restart_areq);
  *	return the completed request. If there is no ongoing request, NULL
  *	is returned without waiting. NULL is not an error condition.
  */
-struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
-				     struct mmc_async_req *areq,
-				     enum mmc_blk_status *ret_stat)
+int mmc_start_areq(struct mmc_host *host,
+		   struct mmc_async_req *areq)
 {
-	int start_err = 0;
 	struct mmc_async_req *previous = host->areq;
+	int ret;
+
+	/* Delete this check when we trust the code */
+	if (!areq)
+		pr_err("%s: NULL asynchronous request!\n", __func__);
 
 	/* Prepare a new request */
-	if (areq)
-		mmc_pre_req(host, areq->mrq);
+	mmc_pre_req(host, areq->mrq);
 
 	/* Finalize previous request, if there is one */
 	if (previous) {
@@ -793,25 +804,20 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 		host->areq = NULL;
 	}
 
-	/* Just always succeed */
-	if (ret_stat)
-		*ret_stat = MMC_BLK_SUCCESS;
-
 	/* Fine so far, start the new request! */
-	if (areq) {
-		init_completion(&areq->complete);
-		areq->mrq->areq = areq;
-		start_err = __mmc_start_data_req(host, areq->mrq);
-		/* Cancel a prepared request if it was not started. */
-		if (start_err) {
-			mmc_post_req(host, areq->mrq, -EINVAL);
-			host->areq = NULL;
-		} else {
-			host->areq = areq;
-		}
+	init_completion(&areq->complete);
+	areq->mrq->areq = areq;
+	ret = __mmc_start_data_req(host, areq->mrq);
+	/* Cancel a prepared request if it was not started. */
+	if (ret) {
+		mmc_post_req(host, areq->mrq, -EINVAL);
+		host->areq = NULL;
+		pr_err("%s: failed to start request\n", __func__);
+	} else {
+		host->areq = areq;
 	}
 
-	return previous;
+	return ret;
 }
 EXPORT_SYMBOL(mmc_start_areq);
 
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index 1859804ecd80..5b8d0f1147ef 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -113,9 +113,9 @@ struct mmc_async_req;
 
 void mmc_finalize_areq(struct work_struct *work);
 int mmc_restart_areq(struct mmc_host *host, struct mmc_async_req *areq);
-struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
-				     struct mmc_async_req *areq,
-				     enum mmc_blk_status *ret_stat);
+int mmc_start_areq(struct mmc_host *host,
+		   struct mmc_async_req *areq);
+void mmc_wait_for_areq(struct mmc_host *host);
 
 int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
 		unsigned int arg);
diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c
index 478869805b96..256fdce38449 100644
--- a/drivers/mmc/core/mmc_test.c
+++ b/drivers/mmc/core/mmc_test.c
@@ -839,10 +839,8 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
 {
 	struct mmc_test_req *rq1, *rq2;
 	struct mmc_test_async_req test_areq[2];
-	struct mmc_async_req *done_areq;
 	struct mmc_async_req *cur_areq = &test_areq[0].areq;
 	struct mmc_async_req *other_areq = &test_areq[1].areq;
-	enum mmc_blk_status status;
 	int i;
 	int ret = RESULT_OK;
 
@@ -864,25 +862,16 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
 	for (i = 0; i < count; i++) {
 		mmc_test_prepare_mrq(test, cur_areq->mrq, sg, sg_len, dev_addr,
 				     blocks, blksz, write);
-		done_areq = mmc_start_areq(test->card->host, cur_areq, &status);
+		ret = mmc_start_areq(test->card->host, cur_areq);
+		mmc_wait_for_areq(test->card->host);
 
-		if (status != MMC_BLK_SUCCESS || (!done_areq && i > 0)) {
-			ret = RESULT_FAIL;
-			goto err;
-		}
-
-		if (done_areq)
-			mmc_test_req_reset(container_of(done_areq->mrq,
+		mmc_test_req_reset(container_of(cur_areq->mrq,
 						struct mmc_test_req, mrq));
 
 		swap(cur_areq, other_areq);
 		dev_addr += blocks;
 	}
 
-	done_areq = mmc_start_areq(test->card->host, NULL, &status);
-	if (status != MMC_BLK_SUCCESS)
-		ret = RESULT_FAIL;
-
 err:
 	kfree(rq1);
 	kfree(rq2);
@@ -2360,7 +2349,6 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
 	struct mmc_request *mrq;
 	unsigned long timeout;
 	bool expired = false;
-	enum mmc_blk_status blkstat = MMC_BLK_SUCCESS;
 	int ret = 0, cmd_ret;
 	u32 status = 0;
 	int count = 0;
@@ -2388,11 +2376,8 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
 
 	/* Start ongoing data request */
 	if (use_areq) {
-		mmc_start_areq(host, &test_areq.areq, &blkstat);
-		if (blkstat != MMC_BLK_SUCCESS) {
-			ret = RESULT_FAIL;
-			goto out_free;
-		}
+		mmc_start_areq(host, &test_areq.areq);
+		mmc_wait_for_areq(host);
 	} else {
 		mmc_wait_for_req(host, mrq);
 	}
@@ -2425,11 +2410,7 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
 	} while (repeat_cmd && R1_CURRENT_STATE(status) != R1_STATE_TRAN);
 
 	/* Wait for data request to complete */
-	if (use_areq) {
-		mmc_start_areq(host, NULL, &blkstat);
-		if (blkstat != MMC_BLK_SUCCESS)
-			ret = RESULT_FAIL;
-	} else {
+	if (!use_areq) {
 		mmc_wait_for_req_done(test->card->host, mrq);
 	}
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index db1fa11d9870..cf43a2d5410d 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -42,6 +42,7 @@ static int mmc_queue_thread(void *d)
 {
 	struct mmc_queue *mq = d;
 	struct request_queue *q = mq->queue;
+	bool claimed_card = false;
 
 	current->flags |= PF_MEMALLOC;
 
@@ -55,7 +56,11 @@ static int mmc_queue_thread(void *d)
 		mq->asleep = false;
 		spin_unlock_irq(q->queue_lock);
 
-		if (req || mq->qcnt) {
+		if (req) {
+			if (!claimed_card) {
+				mmc_get_card(mq->card, NULL);
+				claimed_card = true;
+			}
 			set_current_state(TASK_RUNNING);
 			mmc_blk_issue_rq(mq, req);
 			cond_resched();
@@ -72,6 +77,9 @@ static int mmc_queue_thread(void *d)
 	} while (1);
 	up(&mq->thread_sem);
 
+	if (claimed_card)
+		mmc_put_card(mq->card, NULL);
+
 	return 0;
 }
 
@@ -207,7 +215,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	mq->queue->exit_rq_fn = mmc_exit_request;
 	mq->queue->cmd_size = sizeof(struct mmc_queue_req);
 	mq->queue->queuedata = mq;
-	mq->qcnt = 0;
 	ret = blk_init_allocated_queue(mq->queue);
 	if (ret) {
 		blk_cleanup_queue(mq->queue);
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index dce7cedb9d0b..67ae311b107f 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -67,13 +67,6 @@ struct mmc_queue {
 	bool			asleep;
 	struct mmc_blk_data	*blkdata;
 	struct request_queue	*queue;
-	/*
-	 * FIXME: this counter is not a very reliable way of keeping
-	 * track of how many requests that are ongoing. Switch to just
-	 * letting the block core keep track of requests and per-request
-	 * associated mmc_queue_req data.
-	 */
-	int			qcnt;
 };
 
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 10/12 v5] mmc: queue/block: pass around struct mmc_queue_req*s
  2017-11-10 10:01 ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
                     ` (8 preceding siblings ...)
  2017-11-10 10:01   ` [PATCH 09/12 v5] mmc: queue: stop flushing the pipeline with NULL Linus Walleij
@ 2017-11-10 10:01   ` Linus Walleij
  2017-11-10 10:01   ` [PATCH 11/12 v5] mmc: block: issue requests in massive parallel Linus Walleij
                     ` (4 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Linus Walleij @ 2017-11-10 10:01 UTC (permalink / raw)
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

Instead of passing around several pointers to mmc_queue_req,
request and mmc_queue, and reassigning them left and right, pass
a single mmc_queue_req and dereference the queue and request
from the mmc_queue_req where needed.

The struct mmc_queue_req is the thing that has a lifecycle after
all: this is what we are keeping in our queue, and what the block
layer helps us manage. Augment a bunch of functions to take a
single argument so we can see the trees and not just a big
jungle of arguments.
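
To illustrate, everything is now derived from the single
struct mmc_queue_req pointer; a minimal sketch (example() is a
hypothetical function, the helpers are the ones used in this
patch):

    static void example(struct mmc_queue_req *mq_rq)
    {
            struct request *req = mmc_queue_req_to_req(mq_rq);
            struct mmc_queue *mq = mq_rq->mq;
            struct mmc_card *card = mq->card;

            /* ... the request, queue and card all follow ... */
    }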

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/block.c | 128 ++++++++++++++++++++++++-----------------------
 drivers/mmc/core/block.h |   5 +-
 drivers/mmc/core/queue.c |   2 +-
 3 files changed, 69 insertions(+), 66 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index c7a57006e27f..2cd9fe5a8c9b 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1208,9 +1208,9 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
  * processed it with all other requests and then they get issued in this
  * function.
  */
-static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_drv_op(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_queue_req *mq_rq;
+	struct mmc_queue *mq = mq_rq->mq;
 	struct mmc_card *card = mq->card;
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_blk_ioc_data **idata;
@@ -1220,7 +1220,6 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
 	int ret;
 	int i;
 
-	mq_rq = req_to_mmc_queue_req(req);
 	rpmb_ioctl = (mq_rq->drv_op == MMC_DRV_OP_IOCTL_RPMB);
 
 	switch (mq_rq->drv_op) {
@@ -1264,12 +1263,14 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
 		break;
 	}
 	mq_rq->drv_op_result = ret;
-	blk_end_request_all(req, ret ? BLK_STS_IOERR : BLK_STS_OK);
+	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
+			    ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
-static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	unsigned int from, nr, arg;
 	int err = 0, type = MMC_BLK_DISCARD;
@@ -1310,10 +1311,10 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
 	blk_end_request(req, status, blk_rq_bytes(req));
 }
 
-static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
-				       struct request *req)
+static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	unsigned int from, nr, arg;
 	int err = 0, type = MMC_BLK_SECDISCARD;
@@ -1380,14 +1381,15 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
 	blk_end_request(req, status, blk_rq_bytes(req));
 }
 
-static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	int ret = 0;
 
 	ret = mmc_flush_cache(card);
-	blk_end_request_all(req, ret ? BLK_STS_IOERR : BLK_STS_OK);
+	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
+			    ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
 /*
@@ -1698,18 +1700,18 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 		*do_data_tag_p = do_data_tag;
 }
 
-static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
-			       struct mmc_card *card,
-			       bool disable_multi,
-			       struct mmc_queue *mq)
+static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mq_rq,
+			       bool disable_multi)
 {
 	u32 readcmd, writecmd;
-	struct mmc_blk_request *brq = &mqrq->brq;
-	struct request *req = mmc_queue_req_to_req(mqrq);
+	struct mmc_queue *mq = mq_rq->mq;
+	struct mmc_card *card = mq->card;
+	struct mmc_blk_request *brq = &mq_rq->brq;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
 	struct mmc_blk_data *md = mq->blkdata;
 	bool do_rel_wr, do_data_tag;
 
-	mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
+	mmc_blk_data_prep(mq, mq_rq, disable_multi, &do_rel_wr, &do_data_tag);
 
 	brq->mrq.cmd = &brq->cmd;
 	brq->mrq.areq = NULL;
@@ -1764,9 +1766,9 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 		brq->mrq.sbc = &brq->sbc;
 	}
 
-	mqrq->areq.err_check = mmc_blk_err_check;
-	mqrq->areq.host = card->host;
-	INIT_WORK(&mqrq->areq.finalization_work, mmc_finalize_areq);
+	mq_rq->areq.err_check = mmc_blk_err_check;
+	mq_rq->areq.host = card->host;
+	INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq);
 }
 
 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
@@ -1798,10 +1800,12 @@ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
 	return req_pending;
 }
 
-static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
-				 struct request *req,
-				 struct mmc_queue_req *mqrq)
+static void mmc_blk_rw_cmd_abort(struct mmc_queue_req *mq_rq)
 {
+	struct mmc_queue *mq = mq_rq->mq;
+	struct mmc_card *card = mq->card;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
 	while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req)));
@@ -1809,16 +1813,15 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
 
 /**
  * mmc_blk_rw_try_restart() - tries to restart the current async request
- * @mq: the queue with the card and host to restart
- * @mqrq: the mmc_queue_request containing the areq to be restarted
+ * @mq_rq: the mmc_queue_request containing the areq to be restarted
  */
-static void mmc_blk_rw_try_restart(struct mmc_queue *mq,
-				   struct mmc_queue_req *mqrq)
+static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_async_req *areq = &mqrq->areq;
+	struct mmc_async_req *areq = &mq_rq->areq;
+	struct mmc_queue *mq = mq_rq->mq;
 
 	/* Proceed and try to restart the current async request */
-	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
+	mmc_blk_rw_rq_prep(mq_rq, 0);
 	areq->disable_multi = false;
 	areq->retry = 0;
 	mmc_restart_areq(mq->card->host, areq);
@@ -1867,7 +1870,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 			pr_err("%s BUG rq_tot %d d_xfer %d\n",
 			       __func__, blk_rq_bytes(old_req),
 			       brq->data.bytes_xfered);
-			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+			mmc_blk_rw_cmd_abort(mq_rq);
 			return;
 		}
 		break;
@@ -1875,12 +1878,12 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
 		if (mmc_blk_reset(md, card->host, type)) {
 			if (req_pending)
-				mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-			mmc_blk_rw_try_restart(mq, mq_rq);
+				mmc_blk_rw_cmd_abort(mq_rq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		if (!req_pending) {
-			mmc_blk_rw_try_restart(mq, mq_rq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		break;
@@ -1892,8 +1895,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 	case MMC_BLK_ABORT:
 		if (!mmc_blk_reset(md, card->host, type))
 			break;
-		mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-		mmc_blk_rw_try_restart(mq, mq_rq);
+		mmc_blk_rw_cmd_abort(mq_rq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	case MMC_BLK_DATA_ERR: {
 		int err;
@@ -1901,8 +1904,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		if (!err)
 			break;
 		if (err == -ENODEV) {
-			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-			mmc_blk_rw_try_restart(mq, mq_rq);
+			mmc_blk_rw_cmd_abort(mq_rq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		/* Fall through */
@@ -1923,19 +1926,19 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		req_pending = blk_end_request(old_req, BLK_STS_IOERR,
 					      brq->data.blksz);
 		if (!req_pending) {
-			mmc_blk_rw_try_restart(mq, mq_rq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		break;
 	case MMC_BLK_NOMEDIUM:
-		mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-		mmc_blk_rw_try_restart(mq, mq_rq);
+		mmc_blk_rw_cmd_abort(mq_rq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	default:
 		pr_err("%s: Unhandled return value (%d)",
 				old_req->rq_disk->disk_name, status);
-		mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
-		mmc_blk_rw_try_restart(mq, mq_rq);
+		mmc_blk_rw_cmd_abort(mq_rq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	}
 
@@ -1944,25 +1947,25 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		 * In case of a incomplete request
 		 * prepare it again and resend.
 		 */
-		mmc_blk_rw_rq_prep(mq_rq, card,
-				areq->disable_multi, mq);
+		mmc_blk_rw_rq_prep(mq_rq, areq->disable_multi);
 		mmc_start_areq(card->host, areq);
 		mq_rq->brq.retune_retry_done = retune_retry_done;
 	}
 }
 
-static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
+static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
 {
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+	struct mmc_queue *mq = mq_rq->mq;
 	struct mmc_card *card = mq->card;
-	struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req);
-	struct mmc_async_req *areq = &mqrq_cur->areq;
+	struct mmc_async_req *areq = &mq_rq->areq;
 
 	/*
 	 * If the card was removed, just cancel everything and return.
 	 */
 	if (mmc_card_removed(card)) {
-		new_req->rq_flags |= RQF_QUIET;
-		blk_end_request_all(new_req, BLK_STS_IOERR);
+		req->rq_flags |= RQF_QUIET;
+		blk_end_request_all(req, BLK_STS_IOERR);
 		return;
 	}
 
@@ -1971,24 +1974,25 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	 * multiple read or write is allowed
 	 */
 	if (mmc_large_sector(card) &&
-	    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
+	    !IS_ALIGNED(blk_rq_sectors(req), 8)) {
 		pr_err("%s: Transfer size is not 4KB sector size aligned\n",
-		       new_req->rq_disk->disk_name);
-		mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur);
+		       req->rq_disk->disk_name);
+		mmc_blk_rw_cmd_abort(mq_rq);
 		return;
 	}
 
-	mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+	mmc_blk_rw_rq_prep(mq_rq, 0);
 	areq->disable_multi = false;
 	areq->retry = 0;
 	areq->report_done_status = mmc_blk_rw_done;
 	mmc_start_areq(card->host, areq);
 }
 
-void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
+void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 {
 	int ret;
-	struct mmc_blk_data *md = mq->blkdata;
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 
 	if (!req) {
@@ -2010,7 +2014,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		 * ioctl()s
 		 */
 		mmc_wait_for_areq(card->host);
-		mmc_blk_issue_drv_op(mq, req);
+		mmc_blk_issue_drv_op(mq_rq);
 		break;
 	case REQ_OP_DISCARD:
 		/*
@@ -2018,7 +2022,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		 * discard.
 		 */
 		mmc_wait_for_areq(card->host);
-		mmc_blk_issue_discard_rq(mq, req);
+		mmc_blk_issue_discard_rq(mq_rq);
 		break;
 	case REQ_OP_SECURE_ERASE:
 		/*
@@ -2026,7 +2030,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		 * secure erase.
 		 */
 		mmc_wait_for_areq(card->host);
-		mmc_blk_issue_secdiscard_rq(mq, req);
+		mmc_blk_issue_secdiscard_rq(mq_rq);
 		break;
 	case REQ_OP_FLUSH:
 		/*
@@ -2034,11 +2038,11 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		 * flush.
 		 */
 		mmc_wait_for_areq(card->host);
-		mmc_blk_issue_flush(mq, req);
+		mmc_blk_issue_flush(mq_rq);
 		break;
 	default:
 		/* Normal request, just issue it */
-		mmc_blk_issue_rw_rq(mq, req);
+		mmc_blk_issue_rw_rq(mq_rq);
 		break;
 	}
 }
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index 860ca7c8df86..bbc1c8029b3b 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -1,9 +1,8 @@
 #ifndef _MMC_CORE_BLOCK_H
 #define _MMC_CORE_BLOCK_H
 
-struct mmc_queue;
-struct request;
+struct mmc_queue_req;
 
-void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
+void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq);
 
 #endif
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index cf43a2d5410d..5511e323db31 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -62,7 +62,7 @@ static int mmc_queue_thread(void *d)
 				claimed_card = true;
 			}
 			set_current_state(TASK_RUNNING);
-			mmc_blk_issue_rq(mq, req);
+			mmc_blk_issue_rq(req_to_mmc_queue_req(req));
 			cond_resched();
 		} else {
 			mq->asleep = true;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 11/12 v5] mmc: block: issue requests in massive parallel
  2017-11-10 10:01 ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
                     ` (9 preceding siblings ...)
  2017-11-10 10:01   ` [PATCH 10/12 v5] mmc: queue/block: pass around struct mmc_queue_req*s Linus Walleij
@ 2017-11-10 10:01   ` Linus Walleij
  2017-11-10 10:01   ` [PATCH 12/12 v5] mmc: switch MMC/SD to use blk-mq multiqueueing v5 Linus Walleij
                     ` (3 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Linus Walleij @ 2017-11-10 10:01 UTC (permalink / raw)
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

This makes a crucial change to the issuing mechanism for the
MMC requests:

Before commit "mmc: core: move the asynchronous post-processing"
some parallelism on the read/write requests was achieved by
speculatively postprocessing a request and re-preprocessing and
re-issuing it if something went wrong, which we discover
later when checking for an error.

This is kind of ugly. Instead we need a mechanism like this:

We issue requests, and when they come back from the hardware,
we know if they finished successfully or not. If the request
was successful, we complete the asynchronous request and let a
new request immediately start on the hardware. If, and only if,
it returned an error from the hardware we go down the error
path.

This is achieved by splitting the work path from the hardware
in two: a successful path ending up calling down to
mmc_blk_rw_done() and completing quickly, and an error path
calling down to mmc_blk_rw_done_error().

This has a profound effect: we reintroduce the parallelism on
the successful path as mmc_post_req() can now be called while
the next request is in transit (just like prior to
commit "mmc: core: move the asynchronous post-processing")
and blk_end_request() is called while the next request is
already on the hardware.

The latter has the profound effect of issuing a new request
again, so that we may actually have three requests
in transit at the same time: one on the hardware, one being
prepared (such as DMA flushing) and one being prepared for
issuing next by the block layer. This shows up when we
transition to multiqueue, where it can be exploited.
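
The core of the split looks like this (condensed from the
mmc_blk_rw_done() introduced in this patch; not a complete
function):

    static bool mmc_blk_rw_done(struct mmc_async_req *areq,
                                enum mmc_blk_status status)
    {
            /* Anything but success takes the error path */
            if (status != MMC_BLK_SUCCESS)
                    return mmc_blk_rw_done_error(areq, status);

            /* Quick path: report success to the block layer
             * and let the next request start immediately.
             */
            ...
            return true;
    }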

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v4->v5:
- Fixes on the error path: when a request reports an error back,
  keep the areq on the host as it is not yet finished
  (no assigning NULL to host->areq), do not postprocess
  the request or complete it. This will happen eventually
  when the request succeeds.
- When restarting the command, use mmc_restart_areq() as
  could be expected.
- Augment the .report_done_status() callback to return a
  bool indicating whether the areq is now finished or not,
  to handle the error case where we eventually give up on
  the request and have returned an error to the block
  layer.
- Make sure to post-process the request on the error path
  and pre-process it again when resending an asynchronous
  request. This satisfies the host's semantic expectation
  that every request will be in pre->req->post sequence
  even if there are errors.
- To assure the ordering of pre/post-processing, we need
  to post-process any prepared request with -EINVAL if
  there is an error, then pre-process it again after
  error recovery. To this end a helper pointer in
  host->areq_pending is added so the error path can
  act on this (see the sketch after this list).
- Rebasing on the "next" branch in the MMC tree.
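
The back-out sequence sketched (condensed from the core.c
changes below):

    /* Back out any prepared request before error handling */
    if (host->areq_pending)
            mmc_post_req(host, host->areq_pending->mrq, -EINVAL);

    /* ... retry or give up on the failing request ... */

    /* Then prepare the pending request again */
    if (host->areq_pending)
            mmc_pre_req(host, host->areq_pending->mrq);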
---
 drivers/mmc/core/block.c | 98 ++++++++++++++++++++++++++++++++----------------
 drivers/mmc/core/core.c  | 58 +++++++++++++++++++++++-----
 include/linux/mmc/host.h |  4 +-
 3 files changed, 117 insertions(+), 43 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 2cd9fe5a8c9b..e3ae7241b2eb 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1827,7 +1827,8 @@ static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq)
 	mmc_restart_areq(mq->card->host, areq);
 }
 
-static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status status)
+static bool mmc_blk_rw_done_error(struct mmc_async_req *areq,
+				  enum mmc_blk_status status)
 {
 	struct mmc_queue *mq;
 	struct mmc_blk_data *md;
@@ -1835,7 +1836,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 	struct mmc_host *host;
 	struct mmc_queue_req *mq_rq;
 	struct mmc_blk_request *brq;
-	struct request *old_req;
+	struct request *req;
 	bool req_pending = true;
 	int type, retune_retry_done = 0;
 
@@ -1849,42 +1850,27 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 	card = mq->card;
 	host = card->host;
 	brq = &mq_rq->brq;
-	old_req = mmc_queue_req_to_req(mq_rq);
-	type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+	req = mmc_queue_req_to_req(mq_rq);
+	type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
 
 	switch (status) {
-	case MMC_BLK_SUCCESS:
 	case MMC_BLK_PARTIAL:
-		/*
-		 * A block was successfully transferred.
-		 */
+		/* This should trigger a retransmit */
 		mmc_blk_reset_success(md, type);
-		req_pending = blk_end_request(old_req, BLK_STS_OK,
+		req_pending = blk_end_request(req, BLK_STS_OK,
 					      brq->data.bytes_xfered);
-		/*
-		 * If the blk_end_request function returns non-zero even
-		 * though all data has been transferred and no errors
-		 * were returned by the host controller, it's a bug.
-		 */
-		if (status == MMC_BLK_SUCCESS && req_pending) {
-			pr_err("%s BUG rq_tot %d d_xfer %d\n",
-			       __func__, blk_rq_bytes(old_req),
-			       brq->data.bytes_xfered);
-			mmc_blk_rw_cmd_abort(mq_rq);
-			return;
-		}
 		break;
 	case MMC_BLK_CMD_ERR:
-		req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
+		req_pending = mmc_blk_rw_cmd_err(md, card, brq, req, req_pending);
 		if (mmc_blk_reset(md, card->host, type)) {
 			if (req_pending)
 				mmc_blk_rw_cmd_abort(mq_rq);
 			mmc_blk_rw_try_restart(mq_rq);
-			return;
+			return false;
 		}
 		if (!req_pending) {
 			mmc_blk_rw_try_restart(mq_rq);
-			return;
+			return false;
 		}
 		break;
 	case MMC_BLK_RETRY:
@@ -1897,7 +1883,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 			break;
 		mmc_blk_rw_cmd_abort(mq_rq);
 		mmc_blk_rw_try_restart(mq_rq);
-		return;
+		return false;
 	case MMC_BLK_DATA_ERR: {
 		int err;
 			err = mmc_blk_reset(md, card->host, type);
@@ -1906,7 +1892,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		if (err == -ENODEV) {
 			mmc_blk_rw_cmd_abort(mq_rq);
 			mmc_blk_rw_try_restart(mq_rq);
-			return;
+			return false;
 		}
 		/* Fall through */
 	}
@@ -1914,7 +1900,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		if (brq->data.blocks > 1) {
 			/* Redo read one sector at a time */
 			pr_warn("%s: retrying using single block read\n",
-				old_req->rq_disk->disk_name);
+				req->rq_disk->disk_name);
 			areq->disable_multi = true;
 			break;
 		}
@@ -1923,23 +1909,23 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		 * time, so we only reach here after trying to
 		 * read a single sector.
 		 */
-		req_pending = blk_end_request(old_req, BLK_STS_IOERR,
+		req_pending = blk_end_request(req, BLK_STS_IOERR,
 					      brq->data.blksz);
 		if (!req_pending) {
 			mmc_blk_rw_try_restart(mq_rq);
-			return;
+			return false;
 		}
 		break;
 	case MMC_BLK_NOMEDIUM:
 		mmc_blk_rw_cmd_abort(mq_rq);
 		mmc_blk_rw_try_restart(mq_rq);
-		return;
+		return false;
 	default:
 		pr_err("%s: Unhandled return value (%d)",
-				old_req->rq_disk->disk_name, status);
+		       req->rq_disk->disk_name, status);
 		mmc_blk_rw_cmd_abort(mq_rq);
 		mmc_blk_rw_try_restart(mq_rq);
-		return;
+		return false;
 	}
 
 	if (req_pending) {
@@ -1948,9 +1934,55 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		 * prepare it again and resend.
 		 */
 		mmc_blk_rw_rq_prep(mq_rq, areq->disable_multi);
-		mmc_start_areq(card->host, areq);
 		mq_rq->brq.retune_retry_done = retune_retry_done;
+		mmc_restart_areq(card->host, areq);
+		return false;
+	}
+
+	return true;
+}
+
+static bool mmc_blk_rw_done(struct mmc_async_req *areq,
+			    enum mmc_blk_status status)
+{
+	struct mmc_queue_req *mq_rq;
+	struct request *req;
+	struct mmc_blk_request *brq;
+	struct mmc_queue *mq;
+	struct mmc_blk_data *md;
+	bool req_pending;
+	int type;
+
+	/*
+	 * Anything other than success or partial transfers are errors.
+	 */
+	if (status != MMC_BLK_SUCCESS) {
+		return mmc_blk_rw_done_error(areq, status);
+	}
+
+	/* The quick path if the request was successful */
+	mq_rq =	container_of(areq, struct mmc_queue_req, areq);
+	brq = &mq_rq->brq;
+	mq = mq_rq->mq;
+	md = mq->blkdata;
+	req = mmc_queue_req_to_req(mq_rq);
+	type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+
+	mmc_blk_reset_success(md, type);
+	req_pending = blk_end_request(req, BLK_STS_OK,
+				      brq->data.bytes_xfered);
+	/*
+	 * If the blk_end_request function returns non-zero even
+	 * though all data has been transferred and no errors
+	 * were returned by the host controller, it's a bug.
+	 */
+	if (req_pending) {
+		pr_err("%s BUG rq_tot %d d_xfer %d\n",
+		       __func__, blk_rq_bytes(req),
+		       brq->data.bytes_xfered);
+		mmc_blk_rw_cmd_abort(mq_rq);
 	}
+	return true;
 }
 
 static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 42795fdfb730..95e8e9206f04 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -735,15 +735,52 @@ void mmc_finalize_areq(struct work_struct *work)
 		mmc_start_bkops(host->card, true);
 	}
 
-	/* Successfully postprocess the old request at this point */
-	mmc_post_req(host, areq->mrq, 0);
-
-	/* Call back with status, this will trigger retry etc if needed */
-	if (areq->report_done_status)
-		areq->report_done_status(areq, status);
-
-	/* This opens the gate for the next request to start on the host */
-	complete(&areq->complete);
+	/*
+	 * Here we postprocess the request differently depending on if
+	 * we go on the success path or error path. The success path will
+	 * immediately let new requests hit the host, whereas the error
+	 * path will hold off new requests until we have retried and
+	 * succeeded or failed the current asynchronous request.
+	 */
+	if (status == MMC_BLK_SUCCESS) {
+		/*
+		 * This immediately opens the gate for the next request
+		 * to start on the host while we perform post-processing
+		 * and report back to the block layer.
+		 */
+		host->areq = NULL;
+		complete(&areq->complete);
+		mmc_post_req(host, areq->mrq, 0);
+		if (areq->report_done_status)
+			areq->report_done_status(areq, MMC_BLK_SUCCESS);
+	} else {
+		/*
+		 * Post-process this request. Then, if
+		 * another request was already prepared, back that out
+		 * so we can handle the errors without anything prepared
+		 * on the host.
+		 */
+		if (host->areq_pending)
+			mmc_post_req(host, host->areq_pending->mrq, -EINVAL);
+		/*
+		 * Call back with error status, this will trigger retry
+		 * etc if needed
+		 */
+		if (areq->report_done_status) {
+			if (areq->report_done_status(areq, status)) {
+				/*
+				 * This happens when we finally give up after
+				 * a few retries or on unrecoverable errors.
+				 */
+				mmc_post_req(host, areq->mrq, 0);
+				host->areq = NULL;
+				/* Re-prepare the next request */
+				if (host->areq_pending)
+					mmc_pre_req(host, host->areq_pending->mrq);
+				complete(&areq->complete);
+			}
+		}
+	}
 }
 EXPORT_SYMBOL(mmc_finalize_areq);
 
@@ -765,6 +802,7 @@ int mmc_restart_areq(struct mmc_host *host,
 		     struct mmc_async_req *areq)
 {
 	areq->mrq->areq = areq;
+	mmc_pre_req(host, areq->mrq);
 	return __mmc_start_data_req(host, areq->mrq);
 }
 EXPORT_SYMBOL(mmc_restart_areq);
@@ -797,6 +835,7 @@ int mmc_start_areq(struct mmc_host *host,
 
 	/* Prepare a new request */
 	mmc_pre_req(host, areq->mrq);
+	host->areq_pending = areq;
 
 	/* Finalize previous request, if there is one */
 	if (previous) {
@@ -805,6 +844,7 @@ int mmc_start_areq(struct mmc_host *host,
 	}
 
 	/* Fine so far, start the new request! */
+	host->areq_pending = NULL;
 	init_completion(&areq->complete);
 	areq->mrq->areq = areq;
 	ret = __mmc_start_data_req(host, areq->mrq);
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index f1c362e0765c..985bc479c8a8 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -220,8 +220,9 @@ struct mmc_async_req {
 	enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *);
 	/*
 	 * Report finalization status from the core to e.g. the block layer.
+	 * Returns true if the request is now finished.
 	 */
-	void (*report_done_status)(struct mmc_async_req *, enum mmc_blk_status);
+	bool (*report_done_status)(struct mmc_async_req *, enum mmc_blk_status);
 	struct work_struct finalization_work;
 	struct completion complete;
 	struct mmc_host *host;
@@ -420,6 +421,7 @@ struct mmc_host {
 	struct dentry		*debugfs_root;
 
 	struct mmc_async_req	*areq;		/* active async req */
+	struct mmc_async_req	*areq_pending;	/* prepared but not issued async req */
 
 	/* finalization workqueue, handles finalizing requests */
 	struct workqueue_struct	*req_done_wq;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 12/12 v5] mmc: switch MMC/SD to use blk-mq multiqueueing v5
  2017-11-10 10:01 ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
                     ` (10 preceding siblings ...)
  2017-11-10 10:01   ` [PATCH 11/12 v5] mmc: block: issue requests in massive parallel Linus Walleij
@ 2017-11-10 10:01   ` Linus Walleij
  2017-11-10 13:39   ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
                     ` (2 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Linus Walleij @ 2017-11-10 10:01 UTC (permalink / raw)
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

This switches the MMC/SD stack to use the multiqueue block
layer interface.

We kill off the kthread that was just calling blk_fetch_request()
and let blk-mq drive all traffic; nice, that is how it should work.

Due to having switched the submission mechanics around so that
the completion of requests is now triggered from the host
callbacks, we manage to keep the same performance for linear
reads/writes as we had with the old block layer.

The open questions from earlier patch series have been
addressed:

- mmc_[get|put]_card() is now issued across requests from
  .queue_rq() to .complete() using Adrian's nifty context lock.
  This means that the block layer does not compete with itself
  on getting access to the host, and we can let other users of
  the host come in. (For SDIO and mixed-mode cards.)

- Partial reads are handled by open coding calls to
  blk_update_request() as advised by Christoph (see the
  sketch below).
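
A minimal sketch of that pattern (bytes_xfered stands in for
the actual transfer count; this is not the exact driver flow):

    /* Account for the bytes that actually transferred and
     * only end the request once nothing remains pending.
     */
    if (!blk_update_request(req, BLK_STS_OK, bytes_xfered))
            blk_mq_end_request(req, BLK_STS_OK);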

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v4->v5:
- Rebase on the other changes including improved error
  handling.
- Use quiesce and unquiesce on the queue in the
  suspend/resume cycle.
---
 drivers/mmc/core/block.c |  92 ++++++++++--------
 drivers/mmc/core/queue.c | 237 ++++++++++++++++++++---------------------------
 drivers/mmc/core/queue.h |   8 +-
 3 files changed, 156 insertions(+), 181 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index e3ae7241b2eb..9fa3bfa3b4f8 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -28,6 +28,7 @@
 #include <linux/hdreg.h>
 #include <linux/kdev_t.h>
 #include <linux/blkdev.h>
+#include <linux/blk-mq.h>
 #include <linux/cdev.h>
 #include <linux/mutex.h>
 #include <linux/scatterlist.h>
@@ -93,7 +94,6 @@ static DEFINE_IDA(mmc_rpmb_ida);
  * There is one mmc_blk_data per slot.
  */
 struct mmc_blk_data {
-	spinlock_t	lock;
 	struct device	*parent;
 	struct gendisk	*disk;
 	struct mmc_queue queue;
@@ -1204,6 +1204,23 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
 }
 
 /*
+ * This reports status back to the block layer for a finished request.
+ */
+static void mmc_blk_complete(struct mmc_queue_req *mq_rq,
+			     blk_status_t status)
+{
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+
+	/*
+	 * We are done with I/O, so this call will invoke .complete() and
+	 * release the host lock.
+	 */
+	blk_mq_complete_request(req);
+	/* Then we report the request back to the block layer */
+	blk_mq_end_request(req, status);
+}
+
+/*
  * The non-block commands come back from the block layer after it queued it and
  * processed it with all other requests and then they get issued in this
  * function.
@@ -1262,9 +1279,9 @@ static void mmc_blk_issue_drv_op(struct mmc_queue_req *mq_rq)
 		ret = -EINVAL;
 		break;
 	}
+
 	mq_rq->drv_op_result = ret;
-	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
-			    ret ? BLK_STS_IOERR : BLK_STS_OK);
+	mmc_blk_complete(mq_rq, ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
 static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
@@ -1308,7 +1325,7 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
 	else
 		mmc_blk_reset_success(md, type);
 fail:
-	blk_end_request(req, status, blk_rq_bytes(req));
+	mmc_blk_complete(mq_rq, status);
 }
 
 static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
@@ -1378,7 +1395,7 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
 	if (!err)
 		mmc_blk_reset_success(md, type);
 out:
-	blk_end_request(req, status, blk_rq_bytes(req));
+	mmc_blk_complete(mq_rq, status);
 }
 
 static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
@@ -1388,8 +1405,13 @@ static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
 	int ret = 0;
 
 	ret = mmc_flush_cache(card);
-	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
-			    ret ? BLK_STS_IOERR : BLK_STS_OK);
+	/*
+	 * NOTE: this used to call blk_end_request_all() for both
+	 * cases in the old block layer to flush all queued
+	 * transactions. I am not sure it was even correct to
+	 * do that for the success case.
+	 */
+	mmc_blk_complete(mq_rq, ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
 /*
@@ -1768,7 +1790,6 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mq_rq,
 
 	mq_rq->areq.err_check = mmc_blk_err_check;
 	mq_rq->areq.host = card->host;
-	INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq);
 }
 
 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
@@ -1792,10 +1813,13 @@ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
 		err = mmc_sd_num_wr_blocks(card, &blocks);
 		if (err)
 			req_pending = old_req_pending;
-		else
-			req_pending = blk_end_request(req, BLK_STS_OK, blocks << 9);
+		else {
+			req_pending = blk_update_request(req, BLK_STS_OK,
+							 blocks << 9);
+		}
 	} else {
-		req_pending = blk_end_request(req, BLK_STS_OK, brq->data.bytes_xfered);
+		req_pending = blk_update_request(req, BLK_STS_OK,
+						 brq->data.bytes_xfered);
 	}
 	return req_pending;
 }
@@ -1808,7 +1832,7 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue_req *mq_rq)
 
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
-	while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req)));
+	mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 }
 
 /**
@@ -1857,8 +1881,8 @@ static bool mmc_blk_rw_done_error(struct mmc_async_req *areq,
 	case MMC_BLK_PARTIAL:
 		/* This should trigger a retransmit */
 		mmc_blk_reset_success(md, type);
-		req_pending = blk_end_request(req, BLK_STS_OK,
-					      brq->data.bytes_xfered);
+		req_pending = blk_update_request(req, BLK_STS_OK,
+						 brq->data.bytes_xfered);
 		break;
 	case MMC_BLK_CMD_ERR:
 		req_pending = mmc_blk_rw_cmd_err(md, card, brq, req, req_pending);
@@ -1909,11 +1933,13 @@ static bool mmc_blk_rw_done_error(struct mmc_async_req *areq,
 		 * time, so we only reach here after trying to
 		 * read a single sector.
 		 */
-		req_pending = blk_end_request(req, BLK_STS_IOERR,
-					      brq->data.blksz);
+		req_pending = blk_update_request(req, BLK_STS_IOERR,
+						 brq->data.blksz);
 		if (!req_pending) {
 			mmc_blk_rw_try_restart(mq_rq);
 			return false;
+		} else {
+			mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 		}
 		break;
 	case MMC_BLK_NOMEDIUM:
@@ -1947,10 +1973,8 @@ static bool mmc_blk_rw_done(struct mmc_async_req *areq,
 {
 	struct mmc_queue_req *mq_rq;
 	struct request *req;
-	struct mmc_blk_request *brq;
 	struct mmc_queue *mq;
 	struct mmc_blk_data *md;
-	bool req_pending;
 	int type;
 
 	/*
@@ -1962,26 +1986,13 @@ static bool mmc_blk_rw_done(struct mmc_async_req *areq,
 
 	/* The quick path if the request was successful */
 	mq_rq =	container_of(areq, struct mmc_queue_req, areq);
-	brq = &mq_rq->brq;
 	mq = mq_rq->mq;
 	md = mq->blkdata;
 	req = mmc_queue_req_to_req(mq_rq);
 	type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
 
 	mmc_blk_reset_success(md, type);
-	req_pending = blk_end_request(req, BLK_STS_OK,
-				      brq->data.bytes_xfered);
-	/*
-	 * If the blk_end_request function returns non-zero even
-	 * though all data has been transferred and no errors
-	 * were returned by the host controller, it's a bug.
-	 */
-	if (req_pending) {
-		pr_err("%s BUG rq_tot %d d_xfer %d\n",
-		       __func__, blk_rq_bytes(req),
-		       brq->data.bytes_xfered);
-		mmc_blk_rw_cmd_abort(mq_rq);
-	}
+	mmc_blk_complete(mq_rq, BLK_STS_OK);
 	return true;
 }
 
@@ -1997,7 +2008,12 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
 	 */
 	if (mmc_card_removed(card)) {
 		req->rq_flags |= RQF_QUIET;
-		blk_end_request_all(req, BLK_STS_IOERR);
+		/*
+		 * NOTE: this used to call blk_end_request_all()
+		 * to flush out all queued transactions to the now
+		 * non-present card.
+		 */
+		mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 		return;
 	}
 
@@ -2024,8 +2040,9 @@ void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 {
 	int ret;
 	struct request *req = mmc_queue_req_to_req(mq_rq);
-	struct mmc_blk_data *md = mq_rq->mq->blkdata;
-	struct mmc_card *card = md->queue.card;
+	struct mmc_queue *mq = mq_rq->mq;
+	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_card *card = mq->card;
 
 	if (!req) {
 		pr_err("%s: tried to issue NULL request\n", __func__);
@@ -2034,7 +2051,7 @@ void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 
 	ret = mmc_blk_part_switch(card, md->part_type);
 	if (ret) {
-		blk_end_request_all(req, BLK_STS_IOERR);
+		mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 		return;
 	}
 
@@ -2131,12 +2148,11 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 		goto err_kfree;
 	}
 
-	spin_lock_init(&md->lock);
 	INIT_LIST_HEAD(&md->part);
 	INIT_LIST_HEAD(&md->rpmbs);
 	md->usage = 1;
 
-	ret = mmc_init_queue(&md->queue, card, &md->lock, subname);
+	ret = mmc_init_queue(&md->queue, card, subname);
 	if (ret)
 		goto err_putdisk;
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 5511e323db31..2301573ba2e0 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -10,6 +10,7 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/blkdev.h>
+#include <linux/blk-mq.h>
 #include <linux/freezer.h>
 #include <linux/kthread.h>
 #include <linux/scatterlist.h>
@@ -38,74 +39,6 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
 	return BLKPREP_OK;
 }
 
-static int mmc_queue_thread(void *d)
-{
-	struct mmc_queue *mq = d;
-	struct request_queue *q = mq->queue;
-	bool claimed_card = false;
-
-	current->flags |= PF_MEMALLOC;
-
-	down(&mq->thread_sem);
-	do {
-		struct request *req;
-
-		spin_lock_irq(q->queue_lock);
-		set_current_state(TASK_INTERRUPTIBLE);
-		req = blk_fetch_request(q);
-		mq->asleep = false;
-		spin_unlock_irq(q->queue_lock);
-
-		if (req) {
-			if (!claimed_card) {
-				mmc_get_card(mq->card, NULL);
-				claimed_card = true;
-			}
-			set_current_state(TASK_RUNNING);
-			mmc_blk_issue_rq(req_to_mmc_queue_req(req));
-			cond_resched();
-		} else {
-			mq->asleep = true;
-			if (kthread_should_stop()) {
-				set_current_state(TASK_RUNNING);
-				break;
-			}
-			up(&mq->thread_sem);
-			schedule();
-			down(&mq->thread_sem);
-		}
-	} while (1);
-	up(&mq->thread_sem);
-
-	if (claimed_card)
-		mmc_put_card(mq->card, NULL);
-
-	return 0;
-}
-
-/*
- * Generic MMC request handler.  This is called for any queue on a
- * particular host.  When the host is not busy, we look for a request
- * on any queue on this host, and attempt to issue it.  This may
- * not be the queue we were asked to process.
- */
-static void mmc_request_fn(struct request_queue *q)
-{
-	struct mmc_queue *mq = q->queuedata;
-	struct request *req;
-
-	if (!mq) {
-		while ((req = blk_fetch_request(q)) != NULL) {
-			req->rq_flags |= RQF_QUIET;
-			__blk_end_request_all(req, BLK_STS_IOERR);
-		}
-		return;
-	}
-
-	if (mq->asleep)
-		wake_up_process(mq->thread);
-}
-
 static struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp)
 {
 	struct scatterlist *sg;
@@ -136,127 +69,158 @@ static void mmc_queue_setup_discard(struct request_queue *q,
 		queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
 }
 
+static blk_status_t mmc_queue_request(struct blk_mq_hw_ctx *hctx,
+				      const struct blk_mq_queue_data *bd)
+{
+	struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(bd->rq);
+	struct mmc_queue *mq = mq_rq->mq;
+
+	/* Claim card for block queue context */
+	mmc_get_card(mq->card, &mq->blkctx);
+	mmc_blk_issue_rq(mq_rq);
+
+	return BLK_STS_OK;
+}
+
+static void mmc_complete_request(struct request *req)
+{
+	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
+	struct mmc_queue *mq = mq_rq->mq;
+
+	/* Release card for block queue context */
+	mmc_put_card(mq->card, &mq->blkctx);
+}
+
 /**
  * mmc_init_request() - initialize the MMC-specific per-request data
- * @q: the request queue
+ * @set: tag set for the request
  * @req: the request
- * @gfp: memory allocation policy
+ * @hctx_idx: hardware context index
+ * @numa_node: NUMA node
  */
-static int mmc_init_request(struct request_queue *q, struct request *req,
-			    gfp_t gfp)
+static int mmc_init_request(struct blk_mq_tag_set *set, struct request *req,
+			    unsigned int hctx_idx, unsigned int numa_node)
 {
 	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
-	struct mmc_queue *mq = q->queuedata;
+	struct mmc_queue *mq = set->driver_data;
 	struct mmc_card *card = mq->card;
 	struct mmc_host *host = card->host;
 
-	mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp);
+	mq_rq->sg = mmc_alloc_sg(host->max_segs, GFP_KERNEL);
 	if (!mq_rq->sg)
 		return -ENOMEM;
 	mq_rq->mq = mq;
+	INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq);
 
 	return 0;
 }
 
-static void mmc_exit_request(struct request_queue *q, struct request *req)
+/**
+ * mmc_exit_request() - tear down the MMC-specific per-request data
+ * @set: tag set for the request
+ * @req: the request
+ * @hctx_idx: hardware context index
+ */
+static void mmc_exit_request(struct blk_mq_tag_set *set, struct request *req,
+			     unsigned int hctx_idx)
 {
 	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
 
+	flush_work(&mq_rq->areq.finalization_work);
 	kfree(mq_rq->sg);
 	mq_rq->sg = NULL;
 	mq_rq->mq = NULL;
 }
 
-static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
+static void mmc_setup_queue(struct mmc_queue *mq)
 {
+	struct request_queue *q = mq->queue;
+	struct mmc_card *card = mq->card;
 	struct mmc_host *host = card->host;
 	u64 limit = BLK_BOUNCE_HIGH;
 
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
-	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
-	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue);
+	blk_queue_prep_rq(q, mmc_prep_request);
+	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
+	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
 	if (mmc_can_erase(card))
-		mmc_queue_setup_discard(mq->queue, card);
-
-	blk_queue_bounce_limit(mq->queue, limit);
-	blk_queue_max_hw_sectors(mq->queue,
+		mmc_queue_setup_discard(q, card);
+	blk_queue_bounce_limit(q, limit);
+	blk_queue_max_hw_sectors(q,
 		min(host->max_blk_count, host->max_req_size / 512));
-	blk_queue_max_segments(mq->queue, host->max_segs);
-	blk_queue_max_segment_size(mq->queue, host->max_seg_size);
-
-	/* Initialize thread_sem even if it is not used */
-	sema_init(&mq->thread_sem, 1);
+	blk_queue_max_segments(q, host->max_segs);
+	blk_queue_max_segment_size(q, host->max_seg_size);
 }
 
+static const struct blk_mq_ops mmc_mq_ops = {
+	.queue_rq       = mmc_queue_request,
+	.init_request   = mmc_init_request,
+	.exit_request   = mmc_exit_request,
+	.complete	= mmc_complete_request,
+};
+
 /**
  * mmc_init_queue - initialise a queue structure.
  * @mq: mmc queue
  * @card: mmc card to attach this queue
- * @lock: queue lock
  * @subname: partition subname
  *
  * Initialise a MMC card request queue.
  */
 int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
-		   spinlock_t *lock, const char *subname)
+		   const char *subname)
 {
 	struct mmc_host *host = card->host;
-	int ret = -ENOMEM;
+	int ret;
 
 	mq->card = card;
-	mq->queue = blk_alloc_queue(GFP_KERNEL);
-	if (!mq->queue)
-		return -ENOMEM;
-	mq->queue->queue_lock = lock;
-	mq->queue->request_fn = mmc_request_fn;
-	mq->queue->init_rq_fn = mmc_init_request;
-	mq->queue->exit_rq_fn = mmc_exit_request;
-	mq->queue->cmd_size = sizeof(struct mmc_queue_req);
-	mq->queue->queuedata = mq;
-	ret = blk_init_allocated_queue(mq->queue);
+	mq->tag_set.ops = &mmc_mq_ops;
+	/* The MMC/SD protocols have only one command pipe */
+	mq->tag_set.nr_hw_queues = 1;
+	/* Set this to 2 to simulate async requests; should we use 3? */
+	mq->tag_set.queue_depth = 2;
+	mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
+	mq->tag_set.numa_node = NUMA_NO_NODE;
+	/* We use blocking requests */
+	mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING;
+	/* Should we use BLK_MQ_F_SG_MERGE? */
+	mq->tag_set.driver_data = mq;
+
+	ret = blk_mq_alloc_tag_set(&mq->tag_set);
 	if (ret) {
-		blk_cleanup_queue(mq->queue);
+		dev_err(host->parent, "failed to allocate MQ tag set\n");
 		return ret;
 	}
-
-	blk_queue_prep_rq(mq->queue, mmc_prep_request);
-
-	mmc_setup_queue(mq, card);
-
-	mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
-		host->index, subname ? subname : "");
-
-	if (IS_ERR(mq->thread)) {
-		ret = PTR_ERR(mq->thread);
-		goto cleanup_queue;
+	mq->queue = blk_mq_init_queue(&mq->tag_set);
+	if (IS_ERR(mq->queue)) {
+		dev_err(host->parent, "failed to initialize block MQ\n");
+		ret = PTR_ERR(mq->queue);
+		goto cleanup_free_tag_set;
 	}
+	mq->queue->queuedata = mq;
+	mmc_setup_queue(mq);
 
 	return 0;
 
-cleanup_queue:
-	blk_cleanup_queue(mq->queue);
+cleanup_free_tag_set:
+	blk_mq_free_tag_set(&mq->tag_set);
 	return ret;
 }
 
 void mmc_cleanup_queue(struct mmc_queue *mq)
 {
 	struct request_queue *q = mq->queue;
-	unsigned long flags;
 
 	/* Make sure the queue isn't suspended, as that will deadlock */
 	mmc_queue_resume(mq);
 
-	/* Then terminate our worker thread */
-	kthread_stop(mq->thread);
-
 	/* Empty the queue */
-	spin_lock_irqsave(q->queue_lock, flags);
 	q->queuedata = NULL;
-	blk_start_queue(q);
-	spin_unlock_irqrestore(q->queue_lock, flags);
-
+	blk_cleanup_queue(q);
+	blk_mq_free_tag_set(&mq->tag_set);
 	mq->card = NULL;
 }
 EXPORT_SYMBOL(mmc_cleanup_queue);
@@ -265,23 +229,26 @@ EXPORT_SYMBOL(mmc_cleanup_queue);
  * mmc_queue_suspend - suspend a MMC request queue
  * @mq: MMC queue to suspend
  *
- * Stop the block request queue, and wait for our thread to
- * complete any outstanding requests.  This ensures that we
+ * Stop the block request queue. This ensures that we
  * won't suspend while a request is being processed.
  */
 void mmc_queue_suspend(struct mmc_queue *mq)
 {
 	struct request_queue *q = mq->queue;
-	unsigned long flags;
 
 	if (!mq->suspended) {
-		mq->suspended |= true;
-
-		spin_lock_irqsave(q->queue_lock, flags);
-		blk_stop_queue(q);
-		spin_unlock_irqrestore(q->queue_lock, flags);
-
-		down(&mq->thread_sem);
+		mq->suspended = true;
+		blk_mq_quiesce_queue(q);
+		/*
+		 * Currently the block layer will just block
+		 * new requests from entering the queue after
+		 * this call, so we need some way of making
+		 * sure all outstanding requests are completed
+		 * before suspending. This is one way, maybe
+		 * not so elegant.
+		 */
+		mmc_get_card(mq->card, NULL);
+		mmc_put_card(mq->card, NULL);
 	}
 }
 
@@ -292,16 +259,10 @@ void mmc_queue_suspend(struct mmc_queue *mq)
 void mmc_queue_resume(struct mmc_queue *mq)
 {
 	struct request_queue *q = mq->queue;
-	unsigned long flags;
 
 	if (mq->suspended) {
 		mq->suspended = false;
-
-		up(&mq->thread_sem);
-
-		spin_lock_irqsave(q->queue_lock, flags);
-		blk_start_queue(q);
-		spin_unlock_irqrestore(q->queue_lock, flags);
+		blk_mq_unquiesce_queue(q);
 	}
 }
 
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 67ae311b107f..c78fbb226a90 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -61,16 +61,14 @@ struct mmc_queue_req {
 
 struct mmc_queue {
 	struct mmc_card		*card;
-	struct task_struct	*thread;
-	struct semaphore	thread_sem;
 	bool			suspended;
-	bool			asleep;
 	struct mmc_blk_data	*blkdata;
 	struct request_queue	*queue;
+	struct mmc_ctx		blkctx;
+	struct blk_mq_tag_set	tag_set;
 };
 
-extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
-			  const char *);
+extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, const char *);
 extern void mmc_cleanup_queue(struct mmc_queue *);
 extern void mmc_queue_suspend(struct mmc_queue *);
 extern void mmc_queue_resume(struct mmc_queue *);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 22+ messages in thread
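
The heart of the conversion above is the standard blk-mq registration
pattern. Below is a condensed sketch of that pattern against the
4.14-era block API; it mirrors the shape of the patch, but the sketch_*
names are illustrative, error handling is trimmed, and it is not a
drop-in driver.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/string.h>
#include <linux/err.h>

static blk_status_t sketch_queue_rq(struct blk_mq_hw_ctx *hctx,
				    const struct blk_mq_queue_data *bd)
{
	/* BLK_MQ_F_BLOCKING below means we are allowed to sleep here */
	return BLK_STS_OK;
}

static const struct blk_mq_ops sketch_mq_ops = {
	.queue_rq	= sketch_queue_rq,
};

static int sketch_init_queue(struct blk_mq_tag_set *set,
			     struct request_queue **q)
{
	int ret;

	memset(set, 0, sizeof(*set));
	set->ops = &sketch_mq_ops;
	set->nr_hw_queues = 1;	/* one command pipe, as in MMC/SD */
	set->queue_depth = 2;	/* two tags emulate the old async pipeline */
	set->numa_node = NUMA_NO_NODE;
	set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING;

	ret = blk_mq_alloc_tag_set(set);
	if (ret)
		return ret;

	*q = blk_mq_init_queue(set);
	if (IS_ERR(*q)) {
		blk_mq_free_tag_set(set);
		return PTR_ERR(*q);
	}
	return 0;
}

The queue depth of 2 is what lets the block layer keep one request in
preparation while another executes, i.e. the asynchronous pipeline the
legacy MMC code implemented by hand; a depth of 3, as the comment in
the patch wonders, would simply allow one more request to queue up.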

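The suspend path in the patch drains in-flight requests by claiming and
releasing the card after quiescing the queue. The same idea in generic
form, as a hedged sketch: driver_drain_inflight() below is a
hypothetical stand-in for the mmc_get_card()/mmc_put_card() pair in the
patch, not a real kernel API.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/* Hypothetical driver hook that blocks until all issued requests finish */
void driver_drain_inflight(void);

static void sketch_queue_suspend(struct request_queue *q, bool *suspended)
{
	if (!*suspended) {
		*suspended = true;
		/* No new ->queue_rq() invocations start after this call */
		blk_mq_quiesce_queue(q);
		/*
		 * Quiescing does not wait for requests that were already
		 * dispatched, so explicitly drain them before suspending.
		 */
		driver_drain_inflight();
	}
}

static void sketch_queue_resume(struct request_queue *q, bool *suspended)
{
	if (*suspended) {
		*suspended = false;
		blk_mq_unquiesce_queue(q);
	}
}
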
* Re: [PATCH 00/12 v5] Multiqueue for MMC/SD
  2017-11-10 10:01 ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
                     ` (11 preceding siblings ...)
  2017-11-10 10:01   ` [PATCH 12/12 v5] mmc: switch MMC/SD to use blk-mq multiqueueing v5 Linus Walleij
@ 2017-11-10 13:39   ` Linus Walleij
  2017-11-10 15:24   ` Ulf Hansson
  2017-11-14 12:17   ` Bartlomiej Zolnierkiewicz
  14 siblings, 0 replies; 22+ messages in thread
From: Linus Walleij @ 2017-11-10 13:39 UTC (permalink / raw)
  To: linux-mmc, Ulf Hansson
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman,
	Adrian Hunter, Linus Walleij

On Fri, Nov 10, 2017 at 11:01 AM, Linus Walleij
<linus.walleij@linaro.org> wrote:

> Removing a card during I/O does not work well however :/
> So I guess I would need to work on that if this series should
> continue. (Hopefully unlikely.)

I tested a bit more and it turns out this doesn't work on any of
the MQ patch sets.

This matches Christoph's statement that this is not really
working. I haven't really analyzed why, though; I can see that
the kernel crashes firmly into a brick wall, while on mainline
it does not.

I think it's just something we have to smoke out in the next
release cycle as we switch to MQ.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 00/12 v5] Multiqueue for MMC/SD
  2017-11-10 10:01 ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
                     ` (12 preceding siblings ...)
  2017-11-10 13:39   ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
@ 2017-11-10 15:24   ` Ulf Hansson
  2017-11-14 21:17     ` Linus Walleij
  2017-11-14 12:17   ` Bartlomiej Zolnierkiewicz
  14 siblings, 1 reply; 22+ messages in thread
From: Ulf Hansson @ 2017-11-10 15:24 UTC (permalink / raw)
  To: Linus Walleij, Adrian Hunter
  Cc: linux-mmc, linux-block, Jens Axboe, Christoph Hellwig,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
	Avri Altman

On 10 November 2017 at 11:01, Linus Walleij <linus.walleij@linaro.org> wrote:
> This is the fifth iteration of this patch set.
>
> I *HOPE* that we can scrap this patch set and merge Adrian's
> patches instead, because they also bring CQE support which is
> nice. I had some review comments on his series, mainly that
> it needs to kill off the legacy block layer code path that
> noone likes anyway.
>
> So this is mainly an academic and inspirational exercise.
> Whatever remains of this refactoring, if anything, I can
> certainly do on top of Adrian's patches as well.
>
> What changed since v4 is the error path, since Adrian pointed
> out that the error handling seems to be fragile. It was indeed
> fragile... To make sure things work properly I have run long
> test rounds with fault injection, essentially:

Please correct me if I am wrong: the issues were observed already in
patch 11, before the actual switch to mq was done, right?

>
> Enable FAULT_INJECTION, FAULT_INJECTION_DEBUG_FS,
>        FAIL_MMC_REQUEST
> cd /debug/mmc3/fail_mmc_request/
> echo 1 > probability
> echo -1 > times
>
> Then running a dd to the card, also increased the error rate
> to 10% and completed tests successfully, but at this error
> rate the MMC stack sometimes exceeds the retry limit and the
> dd command fails (as is appropriate).

That's great. I really appreciate that you have run these tests; that
gives me good confidence from an overall point of view.

>
> Removing a card during I/O does not work well however :/
> So I guess I would need to work on that if this series should
> continue. (Hopefully unlikely.)

Yeah, this has actually been rather cumbersome to deal with in
the legacy request path as well. Let's dive into this in more detail as soon
as possible.

>
>
> Linus Walleij (12):
>   mmc: core: move the asynchronous post-processing
>   mmc: core: add a workqueue for completing requests
>   mmc: core: replace waitqueue with worker
>   mmc: core: do away with is_done_rcv
>   mmc: core: do away with is_new_req
>   mmc: core: kill off the context info
>   mmc: queue: simplify queue logic
>   mmc: block: shuffle retry and error handling
>   mmc: queue: stop flushing the pipeline with NULL
>   mmc: queue/block: pass around struct mmc_queue_req*s
>   mmc: block: issue requests in massive parallel
>   mmc: switch MMC/SD to use blk-mq multiqueueing v5
>
>  drivers/mmc/core/block.c    | 557 +++++++++++++++++++++++---------------------
>  drivers/mmc/core/block.h    |   5 +-
>  drivers/mmc/core/bus.c      |   1 -
>  drivers/mmc/core/core.c     | 217 ++++++++++-------
>  drivers/mmc/core/core.h     |  11 +-
>  drivers/mmc/core/host.c     |   1 -
>  drivers/mmc/core/mmc_test.c |  31 +--
>  drivers/mmc/core/queue.c    | 252 ++++++++------------
>  drivers/mmc/core/queue.h    |  16 +-
>  include/linux/mmc/core.h    |   3 +-
>  include/linux/mmc/host.h    |  31 +--
>  11 files changed, 557 insertions(+), 568 deletions(-)

First, I haven't yet commented on the latest version of the mq patch
(patch 12 in this series), nor on Adrian's (patch 3 in his
series), but before doing that, let me share my overall view of how we
could move forward, to see if all of us can agree on that path.

So, what I really like in the $subject series is the step-by-step method;
moving slowly forward enables an easy review, and then the actual switch
to mq gets a diff of "3 files changed, 156 insertions(+), 181
deletions(-)". This shows me that it can be done! Great work!
Of course, I do realize that you may not have considered all
preparations needed for CQE, which Adrian may have thought of in his
mq patch from his series (patch 3), but still.

Moreover, for reasons brought up while reviewing Adrian's series,
regarding whether mq is "ready", and because I see that the diff for patch
12 is small, I suggest that we just skip the step adding a Kconfig
option to allow an opt-in of the mq path. In other words, *the* patch
that makes the switch to mq, should also remove the entire left over
of rubbish code, from the legacy request path. That's also what you do
in patch 12, nice!

Finally, I understand that you would be happy to scrap this series,
but instead let Adrian's series, when re-posted, go first. Could
you perhaps re-consider that? I wonder if it may not be
smoother and less risky to actually apply everything up to patch 11 in
this series?

I noticed that you reported issues with card removal during I/O (for
both yours and Adrian's mq patch), but do those problems exist at
patch 11 - or are they explicitly introduced with the mq patch (patch
12)?

Of course, I realize that if we apply everything up to patch 11, that
would require a massive re-base of Adrian's mq/CQE series, but on the
other hand, no matter which mq patch we decide to go with, it should
be a rather small diff, thus easy to review and less risky.

Adrian, Linus - what do you think?

Kind regards
Uffe

^ permalink raw reply	[flat|nested] 22+ messages in thread
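
The debugfs recipe quoted above is backed by the kernel's generic
fault-injection core. For reference, a minimal hedged sketch of how a
driver typically wires up such knobs: the sketch_* names are
illustrative and this is not the exact MMC core code, and it assumes
CONFIG_FAULT_INJECTION_DEBUG_FS is enabled.

#include <linux/fault-inject.h>
#include <linux/debugfs.h>
#include <linux/err.h>

static DECLARE_FAULT_ATTR(sketch_fail_attr);

/* Create probability/times/... knobs under the given debugfs parent */
static int sketch_fault_init(struct dentry *parent)
{
	struct dentry *dir;

	dir = fault_create_debugfs_attr("fail_request", parent,
					&sketch_fail_attr);
	return PTR_ERR_OR_ZERO(dir);
}

/* Call on each request; true means "inject an error now" */
static bool sketch_should_fail(unsigned int nr_bytes)
{
	return should_fail(&sketch_fail_attr, nr_bytes);
}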

* Re: [PATCH 00/12 v5] Multiqueue for MMC/SD
  2017-11-10 10:01 ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
                     ` (13 preceding siblings ...)
  2017-11-10 15:24   ` Ulf Hansson
@ 2017-11-14 12:17   ` Bartlomiej Zolnierkiewicz
  2017-11-14 13:30     ` Bartlomiej Zolnierkiewicz
  14 siblings, 1 reply; 22+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2017-11-14 12:17 UTC (permalink / raw)
  To: Linus Walleij
  Cc: linux-mmc, Ulf Hansson, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Paolo Valente, Avri Altman,
	Adrian Hunter

[-- Attachment #1: Type: text/plain, Size: 3059 bytes --]


Hi Linus,

On Friday, November 10, 2017 11:01:31 AM Linus Walleij wrote:
> This is the fifth iteration of this patch set.
> 
> I *HOPE* that we can scrap this patch set and merge Adrian's
> patches instead, because they also bring CQE support which is
> nice. I had some review comments on his series, mainly that
> it needs to kill off the legacy block layer code path that
> noone likes anyway.
> 
> So this is mainly an academic and inspirational exercise.
> Whatever remains of this refactoring, if anything, I can
> certainly do on top of Adrian's patches as well.
> 
> What changed since v4 is the error path, since Adrian pointed
> out that the error handling seems to be fragile. It was indeed
> fragile... To make sure things work properly I have run long
> test rounds with fault injection, essentially:
> 
> Enable FAULT_INJECTION, FAULT_INJECTION_DEBUG_FS,
>        FAIL_MMC_REQUEST
> cd /debug/mmc3/fail_mmc_request/
> echo 1 > probability
> echo -1 > times
> 
> Then running a dd to the card, also increased the error rate
> to 10% and completed tests successfully, but at this error
> rate the MMC stack sometimes exceeds the retry limit and the
> dd command fails (as is appropriate).
> 
> Removing a card during I/O does not work well however :/
> So I guess I would need to work on that if this series should
> continue. (Hopefully unlikely.)
> 
> 
> Linus Walleij (12):
>   mmc: core: move the asynchronous post-processing
>   mmc: core: add a workqueue for completing requests
>   mmc: core: replace waitqueue with worker
>   mmc: core: do away with is_done_rcv
>   mmc: core: do away with is_new_req
>   mmc: core: kill off the context info
>   mmc: queue: simplify queue logic
>   mmc: block: shuffle retry and error handling
>   mmc: queue: stop flushing the pipeline with NULL
>   mmc: queue/block: pass around struct mmc_queue_req*s
>   mmc: block: issue requests in massive parallel
>   mmc: switch MMC/SD to use blk-mq multiqueueing v5
> 
>  drivers/mmc/core/block.c    | 557 +++++++++++++++++++++++---------------------
>  drivers/mmc/core/block.h    |   5 +-
>  drivers/mmc/core/bus.c      |   1 -
>  drivers/mmc/core/core.c     | 217 ++++++++++-------
>  drivers/mmc/core/core.h     |  11 +-
>  drivers/mmc/core/host.c     |   1 -
>  drivers/mmc/core/mmc_test.c |  31 +--
>  drivers/mmc/core/queue.c    | 252 ++++++++------------
>  drivers/mmc/core/queue.h    |  16 +-
>  include/linux/mmc/core.h    |   3 +-
>  include/linux/mmc/host.h    |  31 +--
>  11 files changed, 557 insertions(+), 568 deletions(-)

This works much better than the initial version, and a simple dd read
test shows more consistent results than with the vanilla kernel.

However there are still some issues:

1. 30 seconds delay on "Waiting for /dev to be fully populated..."
   during boot

2. reboot command no longer works (there is a livelock after
   "The system is going down for reboot NOW!" message)

Full log (together with SysRq-l & SysRq-t outputs) attached.

Best regards,
--
Bartlomiej Zolnierkiewicz
Samsung R&D Institute Poland
Samsung Electronics

[-- Attachment #2: log.txt.gz --]
[-- Type: application/gzip, Size: 53535 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 00/12 v5] Multiqueue for MMC/SD
  2017-11-14 12:17   ` Bartlomiej Zolnierkiewicz
@ 2017-11-14 13:30     ` Bartlomiej Zolnierkiewicz
  2017-11-14 21:19       ` Linus Walleij
  0 siblings, 1 reply; 22+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2017-11-14 13:30 UTC (permalink / raw)
  To: Linus Walleij
  Cc: linux-mmc, Ulf Hansson, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Paolo Valente, Avri Altman,
	Adrian Hunter

On Tuesday, November 14, 2017 01:17:34 PM Bartlomiej Zolnierkiewicz wrote:

> This works much better than the initial version, and a simple dd read
> test shows more consistent results than with the vanilla kernel.
> 
> However there are still some issues:
> 
> 1. 30 seconds delay on "Waiting for /dev to be fully populated..."
>    during boot
> 
> 2. reboot command no longer works (there is a livelock after
>    "The system is going down for reboot NOW!" message)
> 
> Full log (together with SysRq-l & SysRq-t outputs) attached.

BTW: these problems are not present in Adrian's V13 patchset
(with mmc-mq enabled by default).

Best regards,
--
Bartlomiej Zolnierkiewicz
Samsung R&D Institute Poland
Samsung Electronics

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 00/12 v5] Multiqueue for MMC/SD
  2017-11-10 15:24   ` Ulf Hansson
@ 2017-11-14 21:17     ` Linus Walleij
  2017-11-15 10:24       ` Ulf Hansson
  2017-11-15 13:50       ` Adrian Hunter
  0 siblings, 2 replies; 22+ messages in thread
From: Linus Walleij @ 2017-11-14 21:17 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: Adrian Hunter, linux-mmc, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz,
	Paolo Valente, Avri Altman

On Fri, Nov 10, 2017 at 4:24 PM, Ulf Hansson <ulf.hansson@linaro.org> wrote:
> On 10 November 2017 at 11:01, Linus Walleij <linus.walleij@linaro.org> wrote:
>> This is the fifth iteration of this patch set.
>>
>> I *HOPE* that we can scrap this patch set and merge Adrian's
>> patches instead, because they also bring CQE support which is
>> nice. I had some review comments on his series, mainly that
>> it needs to kill off the legacy block layer code path that
>> noone likes anyway.
>>
>> So this is mainly an academic and inspirational exercise.
>> Whatever remains of this refactoring, if anything, I can
>> certainly do on top of Adrian's patches as well.
>>
>> What changed since v4 is the error path, since Adrian pointed
>> out that the error handling seems to be fragile. It was indeed
>> fragile... To make sure things work properly I have run long
>> test rounds with fault injection, essentially:
>
> Please correct me if I am wrong: the issues were observed already in
> patch 11, before the actual switch to mq was done, right?

Yes. That's mostly where I fixed it up.

> Moreover, for reasons brought up while reviewing Adrian's series,
> regarding whether mq is "ready", and because I see that the diff for patch
> 12 is small, I suggest that we just skip the step adding a Kconfig
> option to allow an opt-in of the mq path. In other words, *the* patch
> that makes the switch to mq, should also remove the entire left over
> of rubbish code, from the legacy request path. That's also what you do
> in patch 12, nice!

Partly true.

Adrian also pointed out the rubbishness of the error handling code
in the old stack, and my patch set does *not* fix that. It is also a part
of his patch set I like very much and a reason why I would prefer to
use Adrian's patches if possible.

We have the following risk factors:

- Observed performance degradation of 1% (on x86 SDHI I guess)
- The kernel crashes if SD card is removed (both patch sets)
- The risk of something nasty happening we don't know of

> Finally, I understand that you would be happy to scrap this series,
> but instead let Adrian's series, when re-posted, go first. Could
> you perhaps re-consider that? I wonder if it may not be
> smoother and less risky to actually apply everything up to patch 11 in
> this series?

This is possible.

But I think it is preferable to proceed with Adrian's patches.
I really like the looks of the code. He says he's coming back with
a set that also kills off the old block layer, and I am pretty
positive I will just ACK the whole thing.

I optimistically think we can jointly fix the card removal issue
and possibly also mitigate or root-cause the performance
degradation observed by Adrian.

In the best of worlds, Ming Lei's patches will just fix this too
(we'll see, we can probably get a branch from the block people
to try it); otherwise we can use tracing and perf to drill into it, I guess.

> I noticed that you reported issues with card removal during I/O (for
> both yours and Adrian's mq patch), but do those problems exist at
> patch 11 - or are they explicitly introduced with the mq patch (patch
> 12)?

I tested it and it is present earlier in the series. I would have to
revisit and hash it out.

> Of course, I realize that if we apply everything up to patch 11, that
> would require a massive re-base of Adrian's mq/CQE series, but on the
> other hand, no matter which mq patch we decide to go with, it should
> be a rather small diff, thus easy to review and less risky.

At this point I would prefer to use Adrian's series. He has explained
pretty well his reasoning and when I tested the code it was performing
well. I have some outstanding thingies, but this I can just as well do
on top of his patches.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 00/12 v5] Multiqueue for MMC/SD
  2017-11-14 13:30     ` Bartlomiej Zolnierkiewicz
@ 2017-11-14 21:19       ` Linus Walleij
  0 siblings, 0 replies; 22+ messages in thread
From: Linus Walleij @ 2017-11-14 21:19 UTC (permalink / raw)
  To: Bartlomiej Zolnierkiewicz
  Cc: linux-mmc, Ulf Hansson, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Paolo Valente, Avri Altman,
	Adrian Hunter

On Tue, Nov 14, 2017 at 2:30 PM, Bartlomiej Zolnierkiewicz
<b.zolnierkie@samsung.com> wrote:
> On Tuesday, November 14, 2017 01:17:34 PM Bartlomiej Zolnierkiewicz wrote:
>
>> This works much better than the initial version, and a simple dd read
>> test shows more consistent results than with the vanilla kernel.
>>
>> However there are still some issues:
>>
>> 1. 30 seconds delay on "Waiting for /dev to be fully populated..."
>>    during boot
>>
>> 2. reboot command no longer works (there is a livelock after
>>    "The system is going down for reboot NOW!" message)
>>
>> Full log (together with SysRq-l & SysRq-t outputs) attached.
>
> BTW: these problems are not present in Adrian's V13 patchset
> (with mmc-mq enabled by default).

Yes, as I even say in the cover letter, I think his patches are
better, so we should use those.

Bart, can you provide a Tested-by for Adrian's patch set?

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 00/12 v5] Multiqueue for MMC/SD
  2017-11-14 21:17     ` Linus Walleij
@ 2017-11-15 10:24       ` Ulf Hansson
  2017-11-15 13:50       ` Adrian Hunter
  1 sibling, 0 replies; 22+ messages in thread
From: Ulf Hansson @ 2017-11-15 10:24 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Adrian Hunter, linux-mmc, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz,
	Paolo Valente, Avri Altman

[...]

>> Moreover, for reasons brought up while reviewing Adrian's series,
>> regarding whether mq is "ready", and because I see that the diff for patch
>> 12 is small, I suggest that we just skip the step adding a Kconfig
>> option to allow an opt-in of the mq path. In other words, *the* patch
>> that makes the switch to mq, should also remove the entire left over
>> of rubbish code, from the legacy request path. That's also what you do
>> in patch 12, nice!
>
> Partly true.
>
> Adrian also pointed out the rubbishness of the error handling code
> in the old stack, and my patch set does *not* fix that. It is also a part
> of his patch set I like very much and a reason why I would prefer to
> use Adrian's patches if possible.
>
> We have the following risk factors:
>
> - Observed performance degradation of 1% (on x86 SDHI I guess)

I don't think that small degradation is a reason for not enabling mq,
although we should for sure continue to investigate why it happens.

> - The kernel crashes if SD card is removed (both patch sets)

Yep, this needs to be fixed.

> - The risk of something nasty happening we don't know of

:-)

>
>> Finally, I understand that you would be happy to scrap this series,
>> but instead let Adrian's series, when re-posted, go first. Could
>> you perhaps re-consider that? I wonder if it may not be
>> smoother and less risky to actually apply everything up to patch 11 in
>> this series?
>
> This is possible.
>
> But I think it is preferable to proceed with Adrian's patches.
> I really like the looks of the code. He says he's coming back with
> a set that also kills off the old block layer, and I am pretty
> positive I will just ACK the whole thing.

Alright, let's go with this option!

>
> I optimistically think we can jointly fix the card removal issue
> and possibly also mitigate or root-cause the performance
> degradation observed by Adrian.
>
> In the best of worlds, Ming Lei's patches will just fix this too
> (we'll see, we can probably get a branch from the block people
> to try it); otherwise we can use tracing and perf to drill into it, I guess.

I think most of Ming's patches addressing performance issues should be
in Linus' master already, so once I have my next branch based on
4.15-rc1, we should be able to run a new round of tests.

Anyway, if anything else is needed from the generic block layer, I am
of course open to pulling in a branch.

>
>> I noticed that you reported issues with card removal during I/O (for
>> both yours and Adrian's mq patch), but do those problems exist at
>> patch 11 - or are they explicitly introduced with the mq patch (patch
>> 12)?
>
> I tested it and it is present earlier in the series. I would have to
> revisit and hash it out.

Right. So, let's then forget about my suggested approach and spend
time more wisely on Adrian's series.

>
>> Of course, I realize that if we apply everything up to patch 11, that
>> would require a massive re-base of Adrian's mq/CQE series, but on the
>> other hand, no matter which mq patch we decide to go with, it should
>> be a rather small diff, thus easy to review and less risky.
>
> At this point I would prefer to use Adrian's series. He has explained
> pretty well his reasoning and when I tested the code it was performing
> well. I have some outstanding thingies, but this I can just as well do
> on top of his patches.

Yep. As stated above, let's go for that solution.

I will then be awaiting a new version from Adrian; hopefully I can
apply his series as soon as rc1 is out, to make sure we get enough
time to smoke out any remaining problems.

Kind regards
Uffe

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 00/12 v5] Multiqueue for MMC/SD
  2017-11-14 21:17     ` Linus Walleij
  2017-11-15 10:24       ` Ulf Hansson
@ 2017-11-15 13:50       ` Adrian Hunter
  2017-11-29 13:13         ` Linus Walleij
  1 sibling, 1 reply; 22+ messages in thread
From: Adrian Hunter @ 2017-11-15 13:50 UTC (permalink / raw)
  To: Linus Walleij, Ulf Hansson
  Cc: linux-mmc, linux-block, Jens Axboe, Christoph Hellwig,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
	Avri Altman

On 14/11/17 23:17, Linus Walleij wrote:
> We have the following risk factors:
> 
> - Observed performance degradation of 1% (on x86 SDHI I guess)
> - The kernel crashes if SD card is removed (both patch sets)

I haven't been able to reproduce that.  Do you have more information?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 00/12 v5] Multiqueue for MMC/SD
  2017-11-15 13:50       ` Adrian Hunter
@ 2017-11-29 13:13         ` Linus Walleij
  0 siblings, 0 replies; 22+ messages in thread
From: Linus Walleij @ 2017-11-29 13:13 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz,
	Paolo Valente, Avri Altman

On Wed, Nov 15, 2017 at 2:50 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> On 14/11/17 23:17, Linus Walleij wrote:
>> We have the following risk factors:
>>
>> - Observed performance degradation of 1% (on x86 SDHI I guess)
>> - The kernel crashes if SD card is removed (both patch sets)
>
> I haven't been able to reproduce that.  Do you have more information?

I saw it in an earlier version of the patch set, but it might be due to
some confusion on my side.

I will try to get this series going and stress it a bit and see what happens.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2017-11-29 13:13 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CGME20171110104657epcas1p278e62237982d200175480c28080cb708@epcas1p2.samsung.com>
2017-11-10 10:01 ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
2017-11-10 10:01   ` [PATCH 01/12 v5] mmc: core: move the asynchronous post-processing Linus Walleij
2017-11-10 10:01   ` [PATCH 02/12 v5] mmc: core: add a workqueue for completing requests Linus Walleij
2017-11-10 10:01   ` [PATCH 03/12 v5] mmc: core: replace waitqueue with worker Linus Walleij
2017-11-10 10:01   ` [PATCH 04/12] mmc: core: do away with is_done_rcv Linus Walleij
2017-11-10 10:01   ` [PATCH 05/12] mmc: core: do away with is_new_req Linus Walleij
2017-11-10 10:01   ` [PATCH 06/12 v5] mmc: core: kill off the context info Linus Walleij
2017-11-10 10:01   ` [PATCH 07/12 v5] mmc: queue: simplify queue logic Linus Walleij
2017-11-10 10:01   ` [PATCH 08/12 v5] mmc: block: shuffle retry and error handling Linus Walleij
2017-11-10 10:01   ` [PATCH 09/12 v5] mmc: queue: stop flushing the pipeline with NULL Linus Walleij
2017-11-10 10:01   ` [PATCH 10/12 v5] mmc: queue/block: pass around struct mmc_queue_req*s Linus Walleij
2017-11-10 10:01   ` [PATCH 11/12 v5] mmc: block: issue requests in massive parallel Linus Walleij
2017-11-10 10:01   ` [PATCH 12/12 v5] mmc: switch MMC/SD to use blk-mq multiqueueing v5 Linus Walleij
2017-11-10 13:39   ` [PATCH 00/12 v5] Multiqueue for MMC/SD Linus Walleij
2017-11-10 15:24   ` Ulf Hansson
2017-11-14 21:17     ` Linus Walleij
2017-11-15 10:24       ` Ulf Hansson
2017-11-15 13:50       ` Adrian Hunter
2017-11-29 13:13         ` Linus Walleij
2017-11-14 12:17   ` Bartlomiej Zolnierkiewicz
2017-11-14 13:30     ` Bartlomiej Zolnierkiewicz
2017-11-14 21:19       ` Linus Walleij
