* [PATCH 1/6 v2] mmc: core: Delete bounce buffer Kconfig option
From: Linus Walleij @ 2017-05-18  9:29 UTC
  To: linux-mmc, Ulf Hansson, Adrian Hunter
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij

This option is activated by all the multiplatform configs and
the like, so we almost always have it turned on, and the memory
it saves is negligible, even more so moving forward. The actual
bounce buffer only gets allocated when used; the only thing the
ifdefs save is a little bit of code.

It is improper to have this decided at compile time by Kconfig:
make it a pure runtime thing and let the host decide whether to
use bounce buffers. We add a new host capability flag,
MMC_CAP_NO_BOUNCE_BUFF, that a host driver can set to disable
bounce buffering.

Notice that mmc_queue_calc_bouncesz() already disables the
bounce buffers if host->max_segs != 1, so any arch that has a
maximum number of segments higher than 1 will have bounce
buffers disabled.
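
A minimal sketch of the resulting runtime check (the exact hunk
is in the diff below):

	if (host->max_segs != 1 || (host->caps & MMC_CAP_NO_BOUNCE_BUFF))
		return 0;	/* no bounce buffering for this host */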

The option CONFIG_MMC_BLOCK_BOUNCE is default y so the
majority of platforms in the kernel already have it on, and
it then gets turned off at runtime since most of these have
a host->max_segs > 1. The few exceptions that have
host->max_segs == 1 and still turn off the bounce buffering
are those that disable it in their defconfig.

Those are the following:

arch/arm/configs/colibri_pxa300_defconfig
arch/arm/configs/zeus_defconfig
- Uses MMC_PXA, drivers/mmc/host/pxamci.c
- Sets host->max_segs = NR_SG, which is 1
- This needs its bounce buffer deactivated so we set
  MMC_CAP_NO_BOUNCE_BUFF in the host driver

arch/arm/configs/davinci_all_defconfig
- Uses MMC_DAVINCI, drivers/mmc/host/davinci_mmc.c
- This driver sets host->max_segs to MAX_NR_SG, which is 16
- That means this driver already disables bounce buffers
- No special action needed for this platform

arch/arm/configs/lpc32xx_defconfig
arch/arm/configs/nhk8815_defconfig
arch/arm/configs/u300_defconfig
- Uses MMC_ARMMMCI, drivers/mmc/host/mmci.[c|h]
- This driver by default sets host->max_segs to NR_SG,
  which is 128, unless a DMA engine is used, and in that case
  the number of segments is also > 1
- That means this driver already disables bounce buffers
- No special action needed for these platforms

arch/arm/configs/sama5_defconfig
- Uses MMC_SDHCI, MMC_SDHCI_PLTFM, MMC_SDHCI_OF_AT91, MMC_ATMELMCI
- Uses drivers/mmc/host/sdhci.c
- Normally sets host->max_segs to SDHCI_MAX_SEGS which is 128 and
  thus disables bounce buffers
- Sets host->max_segs to 1 if SDHCI_USE_SDMA is set
- SDHCI_USE_SDMA is only set by SDHCI on PCI adapters
- That means that for this platform bounce buffers are already
  disabled at runtime
- No special action needed for this platform

arch/blackfin/configs/CM-BF533_defconfig
arch/blackfin/configs/CM-BF537E_defconfig
- Uses MMC_SPI (a simple MMC card connected on SPI pins)
- Uses drivers/mmc/host/mmc_spi.c
- Sets host->max_segs to MMC_SPI_BLOCKSATONCE which is 128
- That means this platform already disables bounce buffers at
  runtime
- No special action needed for these platforms

arch/mips/configs/cavium_octeon_defconfig
- Uses MMC_CAVIUM_OCTEON, drivers/mmc/host/cavium.c
- Sets host->max_segs to 16 or 1
- We set MMC_CAP_NO_BOUNCE_BUFF to be sure for the 1 case

arch/mips/configs/qi_lb60_defconfig
- Uses MMC_JZ4740, drivers/mmc/host/jz4740_mmc.c
- This sets host->max_segs to 128 so bounce buffers are
  already runtime disabled
- No action needed for this platform

It would be interesting to come up with a list of the platforms
that actually end up using bounce buffers. I have not been able
to infer such a list, but bounce buffering kicks in when
host->max_segs == 1 and it is not explicitly disabled.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v2:
- Instead of adding a new bool "disable_bounce" we use the host
  caps variable, reuse the free bit 21 to indicate that bounce
  buffers should be disabled on the host.
---
 drivers/mmc/core/Kconfig  | 18 ------------------
 drivers/mmc/core/queue.c  | 15 +--------------
 drivers/mmc/host/cavium.c |  4 +++-
 drivers/mmc/host/pxamci.c |  6 +++++-
 include/linux/mmc/host.h  |  1 +
 5 files changed, 10 insertions(+), 34 deletions(-)

diff --git a/drivers/mmc/core/Kconfig b/drivers/mmc/core/Kconfig
index fc1ecdaaa9ca..42e89060cd41 100644
--- a/drivers/mmc/core/Kconfig
+++ b/drivers/mmc/core/Kconfig
@@ -61,24 +61,6 @@ config MMC_BLOCK_MINORS
 
 	  If unsure, say 8 here.
 
-config MMC_BLOCK_BOUNCE
-	bool "Use bounce buffer for simple hosts"
-	depends on MMC_BLOCK
-	default y
-	help
-	  SD/MMC is a high latency protocol where it is crucial to
-	  send large requests in order to get high performance. Many
-	  controllers, however, are restricted to continuous memory
-	  (i.e. they can't do scatter-gather), something the kernel
-	  rarely can provide.
-
-	  Say Y here to help these restricted hosts by bouncing
-	  requests back and forth from a large buffer. You will get
-	  a big performance gain at the cost of up to 64 KiB of
-	  physical memory.
-
-	  If unsure, say Y here.
-
 config SDIO_UART
 	tristate "SDIO UART/GPS class support"
 	depends on TTY
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 5c37b6be3e7b..70ba7f94c706 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -219,7 +219,6 @@ static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth)
 	return mqrq;
 }
 
-#ifdef CONFIG_MMC_BLOCK_BOUNCE
 static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth,
 				       unsigned int bouncesz)
 {
@@ -258,7 +257,7 @@ static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
 {
 	unsigned int bouncesz = MMC_QUEUE_BOUNCESZ;
 
-	if (host->max_segs != 1)
+	if (host->max_segs != 1 || (host->caps & MMC_CAP_NO_BOUNCE_BUFF))
 		return 0;
 
 	if (bouncesz > host->max_req_size)
@@ -273,18 +272,6 @@ static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
 
 	return bouncesz;
 }
-#else
-static inline bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq,
-					  int qdepth, unsigned int bouncesz)
-{
-	return false;
-}
-
-static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
-{
-	return 0;
-}
-#endif
 
 static int mmc_queue_alloc_sgs(struct mmc_queue_req *mqrq, int qdepth,
 			       int max_segs)
diff --git a/drivers/mmc/host/cavium.c b/drivers/mmc/host/cavium.c
index 58b51ba6aabd..9c1575f7c1fb 100644
--- a/drivers/mmc/host/cavium.c
+++ b/drivers/mmc/host/cavium.c
@@ -1040,10 +1040,12 @@ int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host)
 	 * We only have a 3.3v supply, we cannot support any
 	 * of the UHS modes. We do support the high speed DDR
 	 * modes up to 52MHz.
+	 *
+	 * Disable bounce buffers for max_segs = 1
 	 */
 	mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED |
 		     MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_POWER_OFF_CARD |
-		     MMC_CAP_3_3V_DDR;
+		     MMC_CAP_3_3V_DDR | MMC_CAP_NO_BOUNCE_BUFF;
 
 	if (host->use_sg)
 		mmc->max_segs = 16;
diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
index c763b404510f..59ab194cb009 100644
--- a/drivers/mmc/host/pxamci.c
+++ b/drivers/mmc/host/pxamci.c
@@ -702,7 +702,11 @@ static int pxamci_probe(struct platform_device *pdev)
 
 	pxamci_init_ocr(host);
 
-	mmc->caps = 0;
+	/*
+	 * This architecture used to disable bounce buffers through its
+	 * defconfig, now it is done at runtime as a host property.
+	 */
+	mmc->caps = MMC_CAP_NO_BOUNCE_BUFF;
 	host->cmdat = 0;
 	if (!cpu_is_pxa25x()) {
 		mmc->caps |= MMC_CAP_4_BIT_DATA | MMC_CAP_SDIO_IRQ;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 21385ac0c9b1..67f6abe5c3af 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -270,6 +270,7 @@ struct mmc_host {
 #define MMC_CAP_UHS_SDR50	(1 << 18)	/* Host supports UHS SDR50 mode */
 #define MMC_CAP_UHS_SDR104	(1 << 19)	/* Host supports UHS SDR104 mode */
 #define MMC_CAP_UHS_DDR50	(1 << 20)	/* Host supports UHS DDR50 mode */
+#define MMC_CAP_NO_BOUNCE_BUFF	(1 << 21)	/* Disable bounce buffers on host */
 #define MMC_CAP_DRIVER_TYPE_A	(1 << 23)	/* Host supports Driver Type A */
 #define MMC_CAP_DRIVER_TYPE_C	(1 << 24)	/* Host supports Driver Type C */
 #define MMC_CAP_DRIVER_TYPE_D	(1 << 25)	/* Host supports Driver Type D */
-- 
2.9.3

* [PATCH 2/6 v2] mmc: core: Allocate per-request data using the block layer core
From: Linus Walleij @ 2017-05-18  9:29 UTC
  To: linux-mmc, Ulf Hansson, Adrian Hunter
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij

The mmc_queue_req is a per-request state container the MMC core uses
to carry bounce buffers, pointers to asynchronous requests and so on.
It is currently allocated as a static array of objects; as a request
comes in, an mmc_queue_req is assigned to it and used during the
lifetime of the request.

This is backwards compared to how other block layer drivers work:
they usually let the block core provide a per-request struct that
gets allocated right behind the struct request, and which can be
obtained using the blk_mq_rq_to_pdu() helper. (The _mq_ infix in
this function name is misleading: it is used by both the old and
the MQ block layer.)
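
For reference, the helper is plain pointer arithmetic over memory
the block core allocates right behind the request; a sketch of the
upstream implementation:

static inline void *blk_mq_rq_to_pdu(struct request *rq)
{
	/* the driver pdu sits immediately after the request itself */
	return rq + 1;
}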

The per-request struct is allocated with the size stored in the
queue's .cmd_size variable, and is initialized using .init_rq_fn()
and cleaned up using .exit_rq_fn().
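
For illustration, wiring this up on the legacy request path amounts
to setting a few queue fields before initializing the queue, as the
diff below does:

	mq->queue->init_rq_fn = mmc_init_request;
	mq->queue->exit_rq_fn = mmc_exit_request;
	mq->queue->cmd_size = sizeof(struct mmc_queue_req);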

This makes the MMC core rely on the block layer mechanism to
allocate the per-request mmc_queue_req state container.

Doing this makes a lot of complicated queue handling go away. We
only need to keep the .qcnt counter that keeps track of how many
requests are currently being processed by the MMC layer. The MQ
block layer will replace this too once we transition to it.

Doing this refactoring is necessary to move the ioctl() operations
into custom block layer requests tagged with REQ_OP_DRV_[IN|OUT]
instead of the custom code using the BigMMCHostLock that we have
today: those require that per-request data be obtainable easily from
a request after creating a custom request with e.g.:

struct request *rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
struct mmc_queue_req *mq_rq = req_to_mq_rq(rq);

And this is not possible with the current construction, as the request
is not immediately assigned the per-request state container, but
instead it gets assigned when the request finally enters the MMC
queue, which is way too late for custom requests.
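
For illustration, the full lifecycle this enables (a sketch based
on a later patch in this series, where idata and ioc_result are
the per-request fields added there) looks roughly like:

	/* allocate a driver-private request; per-request data comes along */
	req = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
	req_to_mmc_queue_req(req)->idata = idata;
	blk_execute_rq(q, NULL, req, 0);
	ioc_err = req_to_mmc_queue_req(req)->ioc_result;
	blk_put_request(req);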

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v2:
- Rename req_to_mq_rq() to req_to_mmc_queue_req()
- Drop irrelevant FIXME comment.
---
 drivers/mmc/core/block.c |  38 ++------
 drivers/mmc/core/queue.c | 221 +++++++++++++----------------------------------
 drivers/mmc/core/queue.h |  22 ++---
 include/linux/mmc/card.h |   2 -
 4 files changed, 79 insertions(+), 204 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 8273b078686d..5f29b5625216 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -129,13 +129,6 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 				      struct mmc_blk_data *md);
 static int get_card_status(struct mmc_card *card, u32 *status, int retries);
 
-static void mmc_blk_requeue(struct request_queue *q, struct request *req)
-{
-	spin_lock_irq(q->queue_lock);
-	blk_requeue_request(q, req);
-	spin_unlock_irq(q->queue_lock);
-}
-
 static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk)
 {
 	struct mmc_blk_data *md;
@@ -1642,7 +1635,7 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
 	while (blk_end_request(req, -EIO, blk_rq_cur_bytes(req)));
-	mmc_queue_req_free(mq, mqrq);
+	mq->qcnt--;
 }
 
 /**
@@ -1662,7 +1655,7 @@ static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req,
 	if (mmc_card_removed(mq->card)) {
 		req->rq_flags |= RQF_QUIET;
 		blk_end_request_all(req, -EIO);
-		mmc_queue_req_free(mq, mqrq);
+		mq->qcnt--; /* FIXME: just set to 0? */
 		return;
 	}
 	/* Else proceed and try to restart the current async request */
@@ -1685,12 +1678,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	bool req_pending = true;
 
 	if (new_req) {
-		mqrq_cur = mmc_queue_req_find(mq, new_req);
-		if (!mqrq_cur) {
-			WARN_ON(1);
-			mmc_blk_requeue(mq->queue, new_req);
-			new_req = NULL;
-		}
+		mqrq_cur = req_to_mmc_queue_req(new_req);
+		mq->qcnt++;
 	}
 
 	if (!mq->qcnt)
@@ -1764,12 +1753,12 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 				if (req_pending)
 					mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
 				else
-					mmc_queue_req_free(mq, mq_rq);
+					mq->qcnt--;
 				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			if (!req_pending) {
-				mmc_queue_req_free(mq, mq_rq);
+				mq->qcnt--;
 				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
@@ -1814,7 +1803,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 			req_pending = blk_end_request(old_req, -EIO,
 						      brq->data.blksz);
 			if (!req_pending) {
-				mmc_queue_req_free(mq, mq_rq);
+				mq->qcnt--;
 				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
@@ -1844,7 +1833,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 		}
 	} while (req_pending);
 
-	mmc_queue_req_free(mq, mq_rq);
+	mq->qcnt--;
 }
 
 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
@@ -2166,7 +2155,6 @@ static int mmc_blk_probe(struct mmc_card *card)
 {
 	struct mmc_blk_data *md, *part_md;
 	char cap_str[10];
-	int ret;
 
 	/*
 	 * Check that the card supports the command class(es) we need.
@@ -2176,15 +2164,9 @@ static int mmc_blk_probe(struct mmc_card *card)
 
 	mmc_fixup_device(card, mmc_blk_fixups);
 
-	ret = mmc_queue_alloc_shared_queue(card);
-	if (ret)
-		return ret;
-
 	md = mmc_blk_alloc(card);
-	if (IS_ERR(md)) {
-		mmc_queue_free_shared_queue(card);
+	if (IS_ERR(md))
 		return PTR_ERR(md);
-	}
 
 	string_get_size((u64)get_capacity(md->disk), 512, STRING_UNITS_2,
 			cap_str, sizeof(cap_str));
@@ -2222,7 +2204,6 @@ static int mmc_blk_probe(struct mmc_card *card)
  out:
 	mmc_blk_remove_parts(card, md);
 	mmc_blk_remove_req(md);
-	mmc_queue_free_shared_queue(card);
 	return 0;
 }
 
@@ -2240,7 +2221,6 @@ static void mmc_blk_remove(struct mmc_card *card)
 	pm_runtime_put_noidle(&card->dev);
 	mmc_blk_remove_req(md);
 	dev_set_drvdata(&card->dev, NULL);
-	mmc_queue_free_shared_queue(card);
 }
 
 static int _mmc_blk_suspend(struct mmc_card *card)
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 70ba7f94c706..c18c41289ecf 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -40,35 +40,6 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
 	return BLKPREP_OK;
 }
 
-struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
-					 struct request *req)
-{
-	struct mmc_queue_req *mqrq;
-	int i = ffz(mq->qslots);
-
-	if (i >= mq->qdepth)
-		return NULL;
-
-	mqrq = &mq->mqrq[i];
-	WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth ||
-		test_bit(mqrq->task_id, &mq->qslots));
-	mqrq->req = req;
-	mq->qcnt += 1;
-	__set_bit(mqrq->task_id, &mq->qslots);
-
-	return mqrq;
-}
-
-void mmc_queue_req_free(struct mmc_queue *mq,
-			struct mmc_queue_req *mqrq)
-{
-	WARN_ON(!mqrq->req || mq->qcnt < 1 ||
-		!test_bit(mqrq->task_id, &mq->qslots));
-	mqrq->req = NULL;
-	mq->qcnt -= 1;
-	__clear_bit(mqrq->task_id, &mq->qslots);
-}
-
 static int mmc_queue_thread(void *d)
 {
 	struct mmc_queue *mq = d;
@@ -149,11 +120,11 @@ static void mmc_request_fn(struct request_queue *q)
 		wake_up_process(mq->thread);
 }
 
-static struct scatterlist *mmc_alloc_sg(int sg_len)
+static struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp)
 {
 	struct scatterlist *sg;
 
-	sg = kmalloc_array(sg_len, sizeof(*sg), GFP_KERNEL);
+	sg = kmalloc_array(sg_len, sizeof(*sg), gfp);
 	if (sg)
 		sg_init_table(sg, sg_len);
 
@@ -179,80 +150,6 @@ static void mmc_queue_setup_discard(struct request_queue *q,
 		queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
 }
 
-static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
-{
-	kfree(mqrq->bounce_sg);
-	mqrq->bounce_sg = NULL;
-
-	kfree(mqrq->sg);
-	mqrq->sg = NULL;
-
-	kfree(mqrq->bounce_buf);
-	mqrq->bounce_buf = NULL;
-}
-
-static void mmc_queue_reqs_free_bufs(struct mmc_queue_req *mqrq, int qdepth)
-{
-	int i;
-
-	for (i = 0; i < qdepth; i++)
-		mmc_queue_req_free_bufs(&mqrq[i]);
-}
-
-static void mmc_queue_free_mqrqs(struct mmc_queue_req *mqrq, int qdepth)
-{
-	mmc_queue_reqs_free_bufs(mqrq, qdepth);
-	kfree(mqrq);
-}
-
-static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth)
-{
-	struct mmc_queue_req *mqrq;
-	int i;
-
-	mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
-	if (mqrq) {
-		for (i = 0; i < qdepth; i++)
-			mqrq[i].task_id = i;
-	}
-
-	return mqrq;
-}
-
-static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth,
-				       unsigned int bouncesz)
-{
-	int i;
-
-	for (i = 0; i < qdepth; i++) {
-		mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
-		if (!mqrq[i].bounce_buf)
-			return -ENOMEM;
-
-		mqrq[i].sg = mmc_alloc_sg(1);
-		if (!mqrq[i].sg)
-			return -ENOMEM;
-
-		mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512);
-		if (!mqrq[i].bounce_sg)
-			return -ENOMEM;
-	}
-
-	return 0;
-}
-
-static bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq, int qdepth,
-				   unsigned int bouncesz)
-{
-	int ret;
-
-	ret = mmc_queue_alloc_bounce_bufs(mqrq, qdepth, bouncesz);
-	if (ret)
-		mmc_queue_reqs_free_bufs(mqrq, qdepth);
-
-	return !ret;
-}
-
 static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
 {
 	unsigned int bouncesz = MMC_QUEUE_BOUNCESZ;
@@ -273,71 +170,61 @@ static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
 	return bouncesz;
 }
 
-static int mmc_queue_alloc_sgs(struct mmc_queue_req *mqrq, int qdepth,
-			       int max_segs)
+/**
+ * mmc_init_request() - initialize the MMC-specific per-request data
+ * @q: the request queue
+ * @req: the request
+ * @gfp: memory allocation policy
+ */
+static int mmc_init_request(struct request_queue *q, struct request *req,
+			    gfp_t gfp)
 {
-	int i;
+	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
+	struct mmc_queue *mq = q->queuedata;
+	struct mmc_card *card = mq->card;
+	struct mmc_host *host = card->host;
 
-	for (i = 0; i < qdepth; i++) {
-		mqrq[i].sg = mmc_alloc_sg(max_segs);
-		if (!mqrq[i].sg)
+	mq_rq->req = req;
+
+	if (card->bouncesz) {
+		mq_rq->bounce_buf = kmalloc(card->bouncesz, gfp);
+		if (!mq_rq->bounce_buf)
+			return -ENOMEM;
+		if (card->bouncesz > 512) {
+			mq_rq->sg = mmc_alloc_sg(1, gfp);
+			if (!mq_rq->sg)
+				return -ENOMEM;
+			mq_rq->bounce_sg = mmc_alloc_sg(card->bouncesz / 512,
+							gfp);
+			if (!mq_rq->bounce_sg)
+				return -ENOMEM;
+		}
+	} else {
+		mq_rq->bounce_buf = NULL;
+		mq_rq->bounce_sg = NULL;
+		mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp);
+		if (!mq_rq->sg)
 			return -ENOMEM;
 	}
 
 	return 0;
 }
 
-void mmc_queue_free_shared_queue(struct mmc_card *card)
+static void mmc_exit_request(struct request_queue *q, struct request *req)
 {
-	if (card->mqrq) {
-		mmc_queue_free_mqrqs(card->mqrq, card->qdepth);
-		card->mqrq = NULL;
-	}
-}
+	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
 
-static int __mmc_queue_alloc_shared_queue(struct mmc_card *card, int qdepth)
-{
-	struct mmc_host *host = card->host;
-	struct mmc_queue_req *mqrq;
-	unsigned int bouncesz;
-	int ret = 0;
-
-	if (card->mqrq)
-		return -EINVAL;
-
-	mqrq = mmc_queue_alloc_mqrqs(qdepth);
-	if (!mqrq)
-		return -ENOMEM;
+	/* It is OK to kfree(NULL) so this will be smooth */
+	kfree(mq_rq->bounce_sg);
+	mq_rq->bounce_sg = NULL;
 
-	card->mqrq = mqrq;
-	card->qdepth = qdepth;
+	kfree(mq_rq->bounce_buf);
+	mq_rq->bounce_buf = NULL;
 
-	bouncesz = mmc_queue_calc_bouncesz(host);
-
-	if (bouncesz && !mmc_queue_alloc_bounce(mqrq, qdepth, bouncesz)) {
-		bouncesz = 0;
-		pr_warn("%s: unable to allocate bounce buffers\n",
-			mmc_card_name(card));
-	}
-
-	card->bouncesz = bouncesz;
-
-	if (!bouncesz) {
-		ret = mmc_queue_alloc_sgs(mqrq, qdepth, host->max_segs);
-		if (ret)
-			goto out_err;
-	}
+	kfree(mq_rq->sg);
+	mq_rq->sg = NULL;
 
-	return ret;
-
-out_err:
-	mmc_queue_free_shared_queue(card);
-	return ret;
-}
-
-int mmc_queue_alloc_shared_queue(struct mmc_card *card)
-{
-	return __mmc_queue_alloc_shared_queue(card, 2);
+	mq_rq->req = NULL;
 }
 
 /**
@@ -360,13 +247,21 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
 	mq->card = card;
-	mq->queue = blk_init_queue(mmc_request_fn, lock);
+	mq->queue = blk_alloc_queue(GFP_KERNEL);
 	if (!mq->queue)
 		return -ENOMEM;
-
-	mq->mqrq = card->mqrq;
-	mq->qdepth = card->qdepth;
+	mq->queue->queue_lock = lock;
+	mq->queue->request_fn = mmc_request_fn;
+	mq->queue->init_rq_fn = mmc_init_request;
+	mq->queue->exit_rq_fn = mmc_exit_request;
+	mq->queue->cmd_size = sizeof(struct mmc_queue_req);
 	mq->queue->queuedata = mq;
+	mq->qcnt = 0;
+	ret = blk_init_allocated_queue(mq->queue);
+	if (ret) {
+		blk_cleanup_queue(mq->queue);
+		return ret;
+	}
 
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
 	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
@@ -374,6 +269,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	if (mmc_can_erase(card))
 		mmc_queue_setup_discard(mq->queue, card);
 
+	card->bouncesz = mmc_queue_calc_bouncesz(host);
 	if (card->bouncesz) {
 		blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
 		blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512);
@@ -400,7 +296,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	return 0;
 
 cleanup_queue:
-	mq->mqrq = NULL;
 	blk_cleanup_queue(mq->queue);
 	return ret;
 }
@@ -421,8 +316,8 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
 	q->queuedata = NULL;
 	blk_start_queue(q);
 	spin_unlock_irqrestore(q->queue_lock, flags);
+	blk_cleanup_queue(mq->queue);
 
-	mq->mqrq = NULL;
 	mq->card = NULL;
 }
 EXPORT_SYMBOL(mmc_cleanup_queue);
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 871796c3f406..dae31bc0c2d3 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -3,9 +3,15 @@
 
 #include <linux/types.h>
 #include <linux/blkdev.h>
+#include <linux/blk-mq.h>
 #include <linux/mmc/core.h>
 #include <linux/mmc/host.h>
 
+static inline struct mmc_queue_req *req_to_mmc_queue_req(struct request *rq)
+{
+	return blk_mq_rq_to_pdu(rq);
+}
+
 static inline bool mmc_req_is_special(struct request *req)
 {
 	return req &&
@@ -34,7 +40,6 @@ struct mmc_queue_req {
 	struct scatterlist	*bounce_sg;
 	unsigned int		bounce_sg_len;
 	struct mmc_async_req	areq;
-	int			task_id;
 };
 
 struct mmc_queue {
@@ -45,14 +50,15 @@ struct mmc_queue {
 	bool			asleep;
 	struct mmc_blk_data	*blkdata;
 	struct request_queue	*queue;
-	struct mmc_queue_req	*mqrq;
-	int			qdepth;
+	/*
+	 * FIXME: this counter is not a very reliable way of keeping
+	 * track of how many requests that are ongoing. Switch to just
+	 * letting the block core keep track of requests and per-request
+	 * associated mmc_queue_req data.
+	 */
 	int			qcnt;
-	unsigned long		qslots;
 };
 
-extern int mmc_queue_alloc_shared_queue(struct mmc_card *card);
-extern void mmc_queue_free_shared_queue(struct mmc_card *card);
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
 			  const char *);
 extern void mmc_cleanup_queue(struct mmc_queue *);
@@ -66,8 +72,4 @@ extern void mmc_queue_bounce_post(struct mmc_queue_req *);
 
 extern int mmc_access_rpmb(struct mmc_queue *);
 
-extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *,
-						struct request *);
-extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *);
-
 #endif
diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
index aad015e0152b..46c73e97e61f 100644
--- a/include/linux/mmc/card.h
+++ b/include/linux/mmc/card.h
@@ -305,9 +305,7 @@ struct mmc_card {
 	struct mmc_part	part[MMC_NUM_PHY_PARTITION]; /* physical partitions */
 	unsigned int    nr_parts;
 
-	struct mmc_queue_req	*mqrq;		/* Shared queue structure */
 	unsigned int		bouncesz;	/* Bounce buffer size */
-	int			qdepth;		/* Shared queue depth */
 };
 
 static inline bool mmc_large_sector(struct mmc_card *card)
-- 
2.9.3

* [PATCH 3/6 v2] mmc: block: Tag is_rpmb as bool
From: Linus Walleij @ 2017-05-18  9:29 UTC
  To: linux-mmc, Ulf Hansson, Adrian Hunter
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij

The variable is_rpmb is clearly a bool and even assigned true
and false, yet declared as an int.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v2:
- No changes, just resending
---
 drivers/mmc/core/block.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 5f29b5625216..f4dab1dfd2ab 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -443,7 +443,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
 	struct mmc_request mrq = {};
 	struct scatterlist sg;
 	int err;
-	int is_rpmb = false;
+	bool is_rpmb = false;
 	u32 status = 0;
 
 	if (!card || !md || !idata)
-- 
2.9.3

* [PATCH 4/6 v2] mmc: block: move single ioctl() commands to block requests
From: Linus Walleij @ 2017-05-18  9:29 UTC
  To: linux-mmc, Ulf Hansson, Adrian Hunter
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij

This wraps single ioctl() commands into block requests using
the custom block layer request types REQ_OP_DRV_IN and
REQ_OP_DRV_OUT.

By doing this we are loosening the grip on the big host lock,
since two calls to mmc_get_card()/mmc_put_card() are removed.

We are storing the ioctl() in/out argument as a pointer in
the per-request struct mmc_queue_req container. Since we
now let the block layer allocate this data, blk_get_request()
will allocate it for us and we can immediately dereference
it and use it to pass the argument into the block layer.

We refactor the if/else/if/else ladder in mmc_blk_issue_rq()
as part of the job, paying some extra attention to the case
where a NULL req is passed into this function, and making
that pipeline flush more explicit.

Tested on the ux500 with the userspace command:
mmc extcsd read /dev/mmcblk3
resulting in a successful EXTCSD info dump back to the
console.

This commit fixes a starvation issue in the MMC/SD stack
that can easily be provoked by issuing the following
commands in sequence:

> dd if=/dev/mmcblk3 of=/dev/null bs=1M &
> mmc extcsd read /dev/mmcblk3

Before this patch, the extcsd read command would hang
(starve) while waiting for the dd command to finish since
the block layer was holding the card/host lock.

After this patch, the extcsd ioctl() command is nicely
interspersed with the rest of the block commands and we
can issue a bunch of ioctl()s from userspace while there
is some busy block IO going on, without any problems.

Conversely userspace ioctl()s can no longer starve
the block layer by holding the card/host lock.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v2:
- Replace the if/else/if/else nest in mmc_blk_issue_rq()
  with a switch() clause at Ulf's request.
- Update to the API change for req_to_mmc_queue_req()
---
 drivers/mmc/core/block.c | 111 ++++++++++++++++++++++++++++++++++++-----------
 drivers/mmc/core/queue.h |   3 ++
 2 files changed, 88 insertions(+), 26 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index f4dab1dfd2ab..9fb2bd529156 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -564,8 +564,10 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 {
 	struct mmc_blk_ioc_data *idata;
 	struct mmc_blk_data *md;
+	struct mmc_queue *mq;
 	struct mmc_card *card;
 	int err = 0, ioc_err = 0;
+	struct request *req;
 
 	/*
 	 * The caller must have CAP_SYS_RAWIO, and must be calling this on the
@@ -591,17 +593,18 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 		goto cmd_done;
 	}
 
-	mmc_get_card(card);
-
-	ioc_err = __mmc_blk_ioctl_cmd(card, md, idata);
-
-	/* Always switch back to main area after RPMB access */
-	if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
-		mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
-
-	mmc_put_card(card);
-
+	/*
+	 * Dispatch the ioctl() into the block request queue.
+	 */
+	mq = &md->queue;
+	req = blk_get_request(mq->queue,
+		idata->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
+		__GFP_RECLAIM);
+	req_to_mmc_queue_req(req)->idata = idata;
+	blk_execute_rq(mq->queue, NULL, req, 0);
+	ioc_err = req_to_mmc_queue_req(req)->ioc_result;
 	err = mmc_blk_ioctl_copy_to_user(ic_ptr, idata);
+	blk_put_request(req);
 
 cmd_done:
 	mmc_blk_put(md);
@@ -611,6 +614,31 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 	return ioc_err ? ioc_err : err;
 }
 
+/*
+ * The ioctl commands come back from the block layer after it queued it and
+ * processed it with all other requests and then they get issued in this
+ * function.
+ */
+static void mmc_blk_ioctl_cmd_issue(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mq_rq;
+	struct mmc_blk_ioc_data *idata;
+	struct mmc_card *card = mq->card;
+	struct mmc_blk_data *md = mq->blkdata;
+	int ioc_err;
+
+	mq_rq = req_to_mmc_queue_req(req);
+	idata = mq_rq->idata;
+	ioc_err = __mmc_blk_ioctl_cmd(card, md, idata);
+	mq_rq->ioc_result = ioc_err;
+
+	/* Always switch back to main area after RPMB access */
+	if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
+		mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
+
+	blk_end_request_all(req, ioc_err);
+}
+
 static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
 				   struct mmc_ioc_multi_cmd __user *user)
 {
@@ -1854,23 +1882,54 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		goto out;
 	}
 
-	if (req && req_op(req) == REQ_OP_DISCARD) {
-		/* complete ongoing async transfer before issuing discard */
-		if (mq->qcnt)
-			mmc_blk_issue_rw_rq(mq, NULL);
-		mmc_blk_issue_discard_rq(mq, req);
-	} else if (req && req_op(req) == REQ_OP_SECURE_ERASE) {
-		/* complete ongoing async transfer before issuing secure erase*/
-		if (mq->qcnt)
-			mmc_blk_issue_rw_rq(mq, NULL);
-		mmc_blk_issue_secdiscard_rq(mq, req);
-	} else if (req && req_op(req) == REQ_OP_FLUSH) {
-		/* complete ongoing async transfer before issuing flush */
-		if (mq->qcnt)
-			mmc_blk_issue_rw_rq(mq, NULL);
-		mmc_blk_issue_flush(mq, req);
+	if (req) {
+		switch (req_op(req)) {
+		case REQ_OP_DRV_IN:
+		case REQ_OP_DRV_OUT:
+			/*
+			 * Complete ongoing async transfer before issuing
+			 * ioctl()s
+			 */
+			if (mq->qcnt)
+				mmc_blk_issue_rw_rq(mq, NULL);
+			mmc_blk_ioctl_cmd_issue(mq, req);
+			break;
+		case REQ_OP_DISCARD:
+			/*
+			 * Complete ongoing async transfer before issuing
+			 * discard.
+			 */
+			if (mq->qcnt)
+				mmc_blk_issue_rw_rq(mq, NULL);
+			mmc_blk_issue_discard_rq(mq, req);
+			break;
+		case REQ_OP_SECURE_ERASE:
+			/*
+			 * Complete ongoing async transfer before issuing
+			 * secure erase.
+			 */
+			if (mq->qcnt)
+				mmc_blk_issue_rw_rq(mq, NULL);
+			mmc_blk_issue_secdiscard_rq(mq, req);
+			break;
+		case REQ_OP_FLUSH:
+			/*
+			 * Complete ongoing async transfer before issuing
+			 * flush.
+			 */
+			if (mq->qcnt)
+				mmc_blk_issue_rw_rq(mq, NULL);
+			mmc_blk_issue_flush(mq, req);
+			break;
+		default:
+			/* Normal request, just issue it */
+			mmc_blk_issue_rw_rq(mq, req);
+			card->host->context_info.is_waiting_last_req = false;
+			break;
+		};
 	} else {
-		mmc_blk_issue_rw_rq(mq, req);
+		/* No request, flushing the pipeline with NULL */
+		mmc_blk_issue_rw_rq(mq, NULL);
 		card->host->context_info.is_waiting_last_req = false;
 	}
 
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index dae31bc0c2d3..005ece9ac7cb 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -22,6 +22,7 @@ static inline bool mmc_req_is_special(struct request *req)
 
 struct task_struct;
 struct mmc_blk_data;
+struct mmc_blk_ioc_data;
 
 struct mmc_blk_request {
 	struct mmc_request	mrq;
@@ -40,6 +41,8 @@ struct mmc_queue_req {
 	struct scatterlist	*bounce_sg;
 	unsigned int		bounce_sg_len;
 	struct mmc_async_req	areq;
+	int			ioc_result;
+	struct mmc_blk_ioc_data	*idata;
 };
 
 struct mmc_queue {
-- 
2.9.3

* [PATCH 5/6 v2] mmc: block: move multi-ioctl() to use block layer
From: Linus Walleij @ 2017-05-18  9:29 UTC
  To: linux-mmc, Ulf Hansson, Adrian Hunter
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij

This also switches the multiple-command ioctl() call to issue
all ioctl()s through the block layer instead of going directly
to the device.

We extend the passed argument with an argument count and loop
over all passed commands in the ioctl() issue function called
from the block layer.

By doing this we are again loosening the grip on the big host
lock, since two calls to mmc_get_card()/mmc_put_card() are
removed.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v2:
- Update to the API change for req_to_mmc_queue_req()
---
 drivers/mmc/core/block.c | 38 +++++++++++++++++++++++++-------------
 drivers/mmc/core/queue.h |  3 ++-
 2 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 9fb2bd529156..e9737987956f 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -563,6 +563,7 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 			     struct mmc_ioc_cmd __user *ic_ptr)
 {
 	struct mmc_blk_ioc_data *idata;
+	struct mmc_blk_ioc_data *idatas[1];
 	struct mmc_blk_data *md;
 	struct mmc_queue *mq;
 	struct mmc_card *card;
@@ -600,7 +601,9 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 	req = blk_get_request(mq->queue,
 		idata->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
 		__GFP_RECLAIM);
-	req_to_mmc_queue_req(req)->idata = idata;
+	idatas[0] = idata;
+	req_to_mmc_queue_req(req)->idata = idatas;
+	req_to_mmc_queue_req(req)->ioc_count = 1;
 	blk_execute_rq(mq->queue, NULL, req, 0);
 	ioc_err = req_to_mmc_queue_req(req)->ioc_result;
 	err = mmc_blk_ioctl_copy_to_user(ic_ptr, idata);
@@ -622,14 +625,17 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 static void mmc_blk_ioctl_cmd_issue(struct mmc_queue *mq, struct request *req)
 {
 	struct mmc_queue_req *mq_rq;
-	struct mmc_blk_ioc_data *idata;
 	struct mmc_card *card = mq->card;
 	struct mmc_blk_data *md = mq->blkdata;
 	int ioc_err;
+	int i;
 
 	mq_rq = req_to_mmc_queue_req(req);
-	idata = mq_rq->idata;
-	ioc_err = __mmc_blk_ioctl_cmd(card, md, idata);
+	for (i = 0; i < mq_rq->ioc_count; i++) {
+		ioc_err = __mmc_blk_ioctl_cmd(card, md, mq_rq->idata[i]);
+		if (ioc_err)
+			break;
+	}
 	mq_rq->ioc_result = ioc_err;
 
 	/* Always switch back to main area after RPMB access */
@@ -646,8 +652,10 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
 	struct mmc_ioc_cmd __user *cmds = user->cmds;
 	struct mmc_card *card;
 	struct mmc_blk_data *md;
+	struct mmc_queue *mq;
 	int i, err = 0, ioc_err = 0;
 	__u64 num_of_cmds;
+	struct request *req;
 
 	/*
 	 * The caller must have CAP_SYS_RAWIO, and must be calling this on the
@@ -689,21 +697,25 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
 		goto cmd_done;
 	}
 
-	mmc_get_card(card);
-
-	for (i = 0; i < num_of_cmds && !ioc_err; i++)
-		ioc_err = __mmc_blk_ioctl_cmd(card, md, idata[i]);
-
-	/* Always switch back to main area after RPMB access */
-	if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
-		mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
 
-	mmc_put_card(card);
+	/*
+	 * Dispatch the ioctl()s into the block request queue.
+	 */
+	mq = &md->queue;
+	req = blk_get_request(mq->queue,
+		idata[0]->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
+		__GFP_RECLAIM);
+	req_to_mmc_queue_req(req)->idata = idata;
+	req_to_mmc_queue_req(req)->ioc_count = num_of_cmds;
+	blk_execute_rq(mq->queue, NULL, req, 0);
+	ioc_err = req_to_mmc_queue_req(req)->ioc_result;
 
 	/* copy to user if data and response */
 	for (i = 0; i < num_of_cmds && !err; i++)
 		err = mmc_blk_ioctl_copy_to_user(&cmds[i], idata[i]);
 
+	blk_put_request(req);
+
 cmd_done:
 	mmc_blk_put(md);
 cmd_err:
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 005ece9ac7cb..8c76e7118c95 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -42,7 +42,8 @@ struct mmc_queue_req {
 	unsigned int		bounce_sg_len;
 	struct mmc_async_req	areq;
 	int			ioc_result;
-	struct mmc_blk_ioc_data	*idata;
+	struct mmc_blk_ioc_data	**idata;
+	unsigned int		ioc_count;
 };
 
 struct mmc_queue {
-- 
2.9.3

* [PATCH 6/6 v2] mmc: queue: delete mmc_req_is_special()
From: Linus Walleij @ 2017-05-18  9:29 UTC
  To: linux-mmc, Ulf Hansson, Adrian Hunter
  Cc: linux-block, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
	Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij

commit cdf8a6fb48882651049e468e6b16956fb83db86c
"mmc: block: Introduce queue semantics"
deleted the last user of mmc_req_is_special(). It was a
horrible hack to classify requests as "special" or
"not special" to begin with, so delete the helper.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
ChangeLog v1->v2:
- No changes, just included this patch in my series.
---
 drivers/mmc/core/queue.h | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 8c76e7118c95..dfe481a8b5ed 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -12,14 +12,6 @@ static inline struct mmc_queue_req *req_to_mmc_queue_req(struct request *rq)
 	return blk_mq_rq_to_pdu(rq);
 }
 
-static inline bool mmc_req_is_special(struct request *req)
-{
-	return req &&
-		(req_op(req) == REQ_OP_FLUSH ||
-		 req_op(req) == REQ_OP_DISCARD ||
-		 req_op(req) == REQ_OP_SECURE_ERASE);
-}
-
 struct task_struct;
 struct mmc_blk_data;
 struct mmc_blk_ioc_data;
-- 
2.9.3

* Re: [PATCH 2/6 v2] mmc: core: Allocate per-request data using the block layer core
From: Christoph Hellwig @ 2017-05-18  9:32 UTC
  To: Linus Walleij
  Cc: linux-mmc, Ulf Hansson, Adrian Hunter, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz,
	Paolo Valente

Btw, you can also remove the struct request backpointer in
struct mmc_queue_req now - blk_mq_rq_from_pdu will do it for you
without the need for a pointer.
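
For reference, the conversion back is plain pointer math; a sketch
of the upstream helper:

static inline struct request *blk_mq_rq_from_pdu(void *pdu)
{
	/* the request sits immediately before the driver pdu */
	return pdu - sizeof(struct request);
}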

* Re: [PATCH 4/6 v2] mmc: block: move single ioctl() commands to block requests
From: Christoph Hellwig @ 2017-05-18  9:36 UTC
  To: Linus Walleij
  Cc: linux-mmc, Ulf Hansson, Adrian Hunter, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz,
	Paolo Valente

On Thu, May 18, 2017 at 11:29:34AM +0200, Linus Walleij wrote:
> We are storing the ioctl() in/out argument as a pointer in
> the per-request struct mmc_queue_req container.

Btw, for the main ioctl data (not the little response field) it might
make sense to use blk_rq_map_user, which will do a get_user_pages
on the user data if the alignment fits, and otherwise handle the
kernel bounce buffering for you.  This should simplify the code
quite a bit more, and in the case where you can access the user
memory directly provide a nice little performance boost.
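
A hypothetical sketch of that for the single-command path, reusing
names from the patch (untested):

	req = blk_get_request(mq->queue,
		idata->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
		__GFP_RECLAIM);
	/* map (or bounce) the user data buffer into the request */
	err = blk_rq_map_user(mq->queue, req, NULL,
		(void __user *)(unsigned long)idata->ic.data_ptr,
		idata->ic.blksz * idata->ic.blocks, GFP_KERNEL);
	if (!err) {
		blk_execute_rq(mq->queue, NULL, req, 0);
		blk_rq_unmap_user(req->bio);
	}
	blk_put_request(req);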

* Re: [PATCH 2/6 v2] mmc: core: Allocate per-request data using the block layer core
From: Linus Walleij @ 2017-05-18 12:39 UTC
  To: Christoph Hellwig
  Cc: linux-mmc, Ulf Hansson, Adrian Hunter, linux-block, Jens Axboe,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente

On Thu, May 18, 2017 at 11:32 AM, Christoph Hellwig <hch@lst.de> wrote:

> Btw, you can also remove the struct request backpointer in
> struct mmc_queue_req now - blk_mq_rq_from_pdu will do it for you
> without the need for a pointer.

Thanks I made a patch for this in the front of my next clean-up
series.

Yours,
Linus Walleij

* Re: [PATCH 1/6 v2] mmc: core: Delete bounce buffer Kconfig option
From: Ulf Hansson @ 2017-05-19  8:30 UTC
  To: Linus Walleij
  Cc: linux-mmc, Adrian Hunter, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz,
	Paolo Valente

On 18 May 2017 at 11:29, Linus Walleij <linus.walleij@linaro.org> wrote:
> [...]

Thanks, the *series* is applied for next! (Responding to patch 1 as
I couldn't find the cover letter for v2.)

Kind regards
Uffe

* Re: [PATCH 1/6 v2] mmc: core: Delete bounce buffer Kconfig option
From: Linus Walleij @ 2017-05-19 13:56 UTC
  To: Ulf Hansson
  Cc: linux-mmc, Adrian Hunter, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz,
	Paolo Valente

On Fri, May 19, 2017 at 10:30 AM, Ulf Hansson <ulf.hansson@linaro.org> wrote:

> Thanks, the *series* applied for next! (Responding to patch1 as
> couldn't find the cover-letter for v2).

Awesome, and just to make sure you will not be bored over the
weekend, I just sent a sequel series expanding the use of
per-request data to move more host locking and congestion out
of the way.

Yours,
Linus Walleij

* Re: [PATCH 4/6 v2] mmc: block: move single ioctl() commands to block requests
From: Christoph Hellwig @ 2017-07-05 19:00 UTC
  To: Linus Walleij
  Cc: linux-mmc, Ulf Hansson, Adrian Hunter, linux-block, Jens Axboe,
	Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz,
	Paolo Valente

Hi Linus,

On Thu, May 18, 2017 at 11:36:14AM +0200, Christoph Hellwig wrote:
> On Thu, May 18, 2017 at 11:29:34AM +0200, Linus Walleij wrote:
> > We are storing the ioctl() in/out argument as a pointer in
> > the per-request struct mmc_queue_req container.
> 
> Btw, for the main ioctl data (not the little response field) it might
> make sense to use blk_rq_map_user, which will do a get_user_pages
> on the user data if the alignment fits, and otherwise handle the
> kernel bounce buffering for you.  This should simplify the code
> quite a bit more, and in the case where you can access the user
> memory directly provide a nice little performance boost.

Did you get a chance to look into this?

* Re: [PATCH 4/6 v2] mmc: block: move single ioctl() commands to block requests
From: Linus Walleij @ 2017-07-31 13:44 UTC
  To: Christoph Hellwig
  Cc: linux-mmc, Ulf Hansson, Adrian Hunter, linux-block, Jens Axboe,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente

On Wed, Jul 5, 2017 at 9:00 PM, Christoph Hellwig <hch@lst.de> wrote:
> On Thu, May 18, 2017 at 11:36:14AM +0200, Christoph Hellwig wrote:
>> On Thu, May 18, 2017 at 11:29:34AM +0200, Linus Walleij wrote:
>> > We are storing the ioctl() in/out argument as a pointer in
>> > the per-request struct mmc_queue_req container.
>>
>> Btw, for the main ioctl data (not the little response field) it might
>> make sense to use blk_rq_map_user, which will do a get_user_pages
>> on the user data if the alignment fits, and otherwise handle the
>> kernel bounce buffering for you.  This should simplify the code
>> quite a bit more, and in the case where you can access the user
>> memory directly provide a nice little performance boost.
>
> Did you get a chance to look into this?

Sorry, just back from vacation.

I am rebasing my MMC patch stack, so I will take this opportunity to
also look at this during the week. I just need to make sure I find the
right userspace calls to exercise it.

Yours,
Linus Walleij
