* [PATCH v5 0/9] Support limits below the page size
@ 2023-05-22 22:25 Bart Van Assche
  2023-05-22 22:25 ` [PATCH v5 1/9] block: Use pr_info() instead of printk(KERN_INFO ...) Bart Van Assche
                   ` (9 more replies)
  0 siblings, 10 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche

Hi Jens,

We want to improve Android performance by increasing the page size from 4 KiB
to 16 KiB. However, some of the storage controllers we care about do not support
DMA segments larger than 4 KiB. Hence the need to support DMA segments that
are smaller than the size of one virtual memory page. This patch series implements
that support. Please consider this patch series for the next merge window.

Thanks,

Bart.

Changes compared to v4:
- Fixed the debugfs patch such that the behavior for creating the block
  debugfs directory is retained.
- Made the description of patch "Support configuring limits below the page
  size" more detailed. Split that patch into two patches.
- Added patch "Use pr_info() instead of printk(KERN_INFO ...)".

Changes compared to v3:
- Removed CONFIG_BLK_SUB_PAGE_SEGMENTS and QUEUE_FLAG_SUB_PAGE_SEGMENTS.
  Replaced these by a new member in struct queue_limits and a static branch.
- The static branch that controls whether or not sub-page limits are enabled
  is set by the block layer core instead of by block drivers.
- Dropped the patches that are no longer needed (SCSI core and UFS Exynos
  driver).

Changes compared to v2:
- For SCSI drivers, only set flag QUEUE_FLAG_SUB_PAGE_SEGMENTS if necessary.
- In the scsi_debug patch, sorted kernel module parameters alphabetically.
  Only set flag QUEUE_FLAG_SUB_PAGE_SEGMENTS if necessary.
- Added a patch for the UFS Exynos driver that enables
  CONFIG_BLK_SUB_PAGE_SEGMENTS if the page size exceeds 4 KiB.

Changes compared to v1:
- Added a CONFIG variable that controls whether or not small segment support
  is enabled.
- Improved patch descriptions.

Bart Van Assche (9):
  block: Use pr_info() instead of printk(KERN_INFO ...)
  block: Prepare for supporting sub-page limits
  block: Support configuring limits below the page size
  block: Make sub_page_limit_queues available in debugfs
  block: Support submitting passthrough requests with small segments
  block: Add support for filesystem requests and small segments
  block: Add support for small segments in blk_rq_map_user_iov()
  scsi_debug: Support configuring the maximum segment size
  null_blk: Support configuring the maximum segment size

 block/blk-core.c                  |  4 ++
 block/blk-map.c                   | 29 +++++++---
 block/blk-merge.c                 |  8 ++-
 block/blk-mq-debugfs.c            |  9 ++++
 block/blk-mq-debugfs.h            |  6 +++
 block/blk-mq.c                    |  2 +
 block/blk-settings.c              | 88 ++++++++++++++++++++++++++-----
 block/blk.h                       | 39 +++++++++++---
 drivers/block/null_blk/main.c     | 19 +++++--
 drivers/block/null_blk/null_blk.h |  1 +
 drivers/scsi/scsi_debug.c         |  4 ++
 include/linux/blkdev.h            |  2 +
 12 files changed, 182 insertions(+), 29 deletions(-)



* [PATCH v5 1/9] block: Use pr_info() instead of printk(KERN_INFO ...)
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
@ 2023-05-22 22:25 ` Bart Van Assche
  2023-05-22 23:10   ` Luis Chamberlain
  2023-05-22 22:25 ` [PATCH v5 2/9] block: Prepare for supporting sub-page limits Bart Van Assche
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche,
	Ming Lei, Keith Busch

Switch to the modern style of printing kernel messages. Use %u instead
of %d to print unsigned integers.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-settings.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 896b4654ab00..1d8d2ae7bdf4 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -127,8 +127,7 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
 
 	if ((max_hw_sectors << 9) < PAGE_SIZE) {
 		max_hw_sectors = 1 << (PAGE_SHIFT - 9);
-		printk(KERN_INFO "%s: set to minimum %d\n",
-		       __func__, max_hw_sectors);
+		pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
 	}
 
 	max_hw_sectors = round_down(max_hw_sectors,
@@ -248,8 +247,7 @@ void blk_queue_max_segments(struct request_queue *q, unsigned short max_segments
 {
 	if (!max_segments) {
 		max_segments = 1;
-		printk(KERN_INFO "%s: set to minimum %d\n",
-		       __func__, max_segments);
+		pr_info("%s: set to minimum %u\n", __func__, max_segments);
 	}
 
 	q->limits.max_segments = max_segments;
@@ -285,8 +283,7 @@ void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
 {
 	if (max_size < PAGE_SIZE) {
 		max_size = PAGE_SIZE;
-		printk(KERN_INFO "%s: set to minimum %d\n",
-		       __func__, max_size);
+		pr_info("%s: set to minimum %u\n", __func__, max_size);
 	}
 
 	/* see blk_queue_virt_boundary() for the explanation */
@@ -740,8 +737,7 @@ void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
 {
 	if (mask < PAGE_SIZE - 1) {
 		mask = PAGE_SIZE - 1;
-		printk(KERN_INFO "%s: set to minimum %lx\n",
-		       __func__, mask);
+		pr_info("%s: set to minimum %lx\n", __func__, mask);
 	}
 
 	q->limits.seg_boundary_mask = mask;


* [PATCH v5 2/9] block: Prepare for supporting sub-page limits
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
  2023-05-22 22:25 ` [PATCH v5 1/9] block: Use pr_info() instead of printk(KERN_INFO ...) Bart Van Assche
@ 2023-05-22 22:25 ` Bart Van Assche
  2023-05-22 23:26   ` Luis Chamberlain
  2023-05-22 22:25 ` [PATCH v5 3/9] block: Support configuring limits below the page size Bart Van Assche
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche,
	Ming Lei, Keith Busch

Introduce variables that represent the lower configuration bounds. This
patch does not change any functionality.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-settings.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 1d8d2ae7bdf4..95d6e836c4a7 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -123,10 +123,11 @@ EXPORT_SYMBOL(blk_queue_bounce_limit);
 void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_sectors)
 {
 	struct queue_limits *limits = &q->limits;
+	unsigned int min_max_hw_sectors = PAGE_SIZE >> SECTOR_SHIFT;
 	unsigned int max_sectors;
 
-	if ((max_hw_sectors << 9) < PAGE_SIZE) {
-		max_hw_sectors = 1 << (PAGE_SHIFT - 9);
+	if (max_hw_sectors < min_max_hw_sectors) {
+		max_hw_sectors = min_max_hw_sectors;
 		pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
 	}
 
@@ -281,8 +282,10 @@ EXPORT_SYMBOL_GPL(blk_queue_max_discard_segments);
  **/
 void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
 {
-	if (max_size < PAGE_SIZE) {
-		max_size = PAGE_SIZE;
+	unsigned int min_max_segment_size = PAGE_SIZE;
+
+	if (max_size < min_max_segment_size) {
+		max_size = min_max_segment_size;
 		pr_info("%s: set to minimum %u\n", __func__, max_size);
 	}
 


* [PATCH v5 3/9] block: Support configuring limits below the page size
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
  2023-05-22 22:25 ` [PATCH v5 1/9] block: Use pr_info() instead of printk(KERN_INFO ...) Bart Van Assche
  2023-05-22 22:25 ` [PATCH v5 2/9] block: Prepare for supporting sub-page limits Bart Van Assche
@ 2023-05-22 22:25 ` Bart Van Assche
  2023-05-27  3:16   ` Luis Chamberlain
  2023-05-22 22:25 ` [PATCH v5 4/9] block: Make sub_page_limit_queues available in debugfs Bart Van Assche
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche,
	Ming Lei, Keith Busch

Allow block drivers to configure the following:
* Maximum number of hardware sectors values smaller than
  PAGE_SIZE >> SECTOR_SHIFT. For PAGE_SIZE = 4096 this means that values
  below 8 become supported.
* A maximum segment size below the page size. This is most useful
  for page sizes above 4096 bytes.

The blk_sub_page_limits static branch will be used in later patches to avoid
affecting the performance of block drivers that support segment sizes >=
PAGE_SIZE and max_hw_sectors >= PAGE_SIZE >> SECTOR_SHIFT.

This patch may change the behavior of existing block drivers from broken
into working. If a block driver calls blk_queue_max_hw_sectors() or
blk_queue_max_segment_size(), this is usually done to configure the maximum
supported limits. An attempt to configure a limit below what the block layer
supports causes the block layer to select a larger value. If that value is
not supported by the block driver, this may cause data other than the
requested data to be transferred, a kernel crash, or other undesirable
behavior.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-core.c       |  2 ++
 block/blk-settings.c   | 57 ++++++++++++++++++++++++++++++++++++++++++
 block/blk.h            |  9 +++++++
 include/linux/blkdev.h |  2 ++
 4 files changed, 70 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index 00c74330fa92..814bfb9c9489 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -264,6 +264,8 @@ static void blk_free_queue_rcu(struct rcu_head *rcu_head)
 static void blk_free_queue(struct request_queue *q)
 {
 	blk_free_queue_stats(q->stats);
+	blk_disable_sub_page_limits(&q->limits);
+
 	if (queue_is_mq(q))
 		blk_mq_release(q);
 
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 95d6e836c4a7..a4ef1dfeef76 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -19,6 +19,11 @@
 #include "blk-rq-qos.h"
 #include "blk-wbt.h"
 
+/* Protects blk_nr_sub_page_limit_queues and blk_sub_page_limits changes. */
+static DEFINE_MUTEX(blk_sub_page_limit_lock);
+static uint32_t blk_nr_sub_page_limit_queues;
+DEFINE_STATIC_KEY_FALSE(blk_sub_page_limits);
+
 void blk_queue_rq_timeout(struct request_queue *q, unsigned int timeout)
 {
 	q->rq_timeout = timeout;
@@ -59,6 +64,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->zoned = BLK_ZONED_NONE;
 	lim->zone_write_granularity = 0;
 	lim->dma_alignment = 511;
+	lim->sub_page_limits = false;
 }
 
 /**
@@ -101,6 +107,47 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
 }
 EXPORT_SYMBOL(blk_queue_bounce_limit);
 
+/**
+ * blk_enable_sub_page_limits - enable support for max_segment_size values smaller than PAGE_SIZE and for max_hw_sectors values below PAGE_SIZE >> SECTOR_SHIFT
+ * @lim: request queue limits for which to enable support of these features.
+ *
+ * Support for these features is not enabled all the time because of the
+ * runtime overhead of these features.
+ */
+static void blk_enable_sub_page_limits(struct queue_limits *lim)
+{
+	if (lim->sub_page_limits)
+		return;
+
+	lim->sub_page_limits = true;
+
+	mutex_lock(&blk_sub_page_limit_lock);
+	if (++blk_nr_sub_page_limit_queues == 1)
+		static_branch_enable(&blk_sub_page_limits);
+	mutex_unlock(&blk_sub_page_limit_lock);
+}
+
+/**
+ * blk_disable_sub_page_limits - disable support for max_segment_size values smaller than PAGE_SIZE and for max_hw_sectors values below PAGE_SIZE >> SECTOR_SHIFT
+ * @lim: request queue limits for which to enable support of these features.
+ *
+ * Support for these features is not enabled all the time because of the
+ * runtime overhead of these features.
+ */
+void blk_disable_sub_page_limits(struct queue_limits *lim)
+{
+	if (!lim->sub_page_limits)
+		return;
+
+	lim->sub_page_limits = false;
+
+	mutex_lock(&blk_sub_page_limit_lock);
+	WARN_ON_ONCE(blk_nr_sub_page_limit_queues <= 0);
+	if (--blk_nr_sub_page_limit_queues == 0)
+		static_branch_disable(&blk_sub_page_limits);
+	mutex_unlock(&blk_sub_page_limit_lock);
+}
+
 /**
  * blk_queue_max_hw_sectors - set max sectors for a request for this queue
  * @q:  the request queue for the device
@@ -126,6 +173,11 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
 	unsigned int min_max_hw_sectors = PAGE_SIZE >> SECTOR_SHIFT;
 	unsigned int max_sectors;
 
+	if (max_hw_sectors < min_max_hw_sectors) {
+		blk_enable_sub_page_limits(limits);
+		min_max_hw_sectors = 1;
+	}
+
 	if (max_hw_sectors < min_max_hw_sectors) {
 		max_hw_sectors = min_max_hw_sectors;
 		pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
@@ -284,6 +336,11 @@ void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
 {
 	unsigned int min_max_segment_size = PAGE_SIZE;
 
+	if (max_size < min_max_segment_size) {
+		blk_enable_sub_page_limits(&q->limits);
+		min_max_segment_size = SECTOR_SIZE;
+	}
+
 	if (max_size < min_max_segment_size) {
 		max_size = min_max_segment_size;
 		pr_info("%s: set to minimum %u\n", __func__, max_size);
diff --git a/block/blk.h b/block/blk.h
index 9f171b8f1e34..49526127ea08 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -13,6 +13,7 @@ struct elevator_type;
 #define BLK_MAX_TIMEOUT		(5 * HZ)
 
 extern struct dentry *blk_debugfs_root;
+DECLARE_STATIC_KEY_FALSE(blk_sub_page_limits);
 
 struct blk_flush_queue {
 	unsigned int		flush_pending_idx:1;
@@ -32,6 +33,14 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
 					      gfp_t flags);
 void blk_free_flush_queue(struct blk_flush_queue *q);
 
+static inline bool blk_queue_sub_page_limits(const struct queue_limits *lim)
+{
+	return static_branch_unlikely(&blk_sub_page_limits) &&
+		lim->sub_page_limits;
+}
+
+void blk_disable_sub_page_limits(struct queue_limits *q);
+
 void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index fe99948688df..e54fbb124efb 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -310,6 +310,8 @@ struct queue_limits {
 	 * due to possible offsets.
 	 */
 	unsigned int		dma_alignment;
+
+	bool			sub_page_limits;
 };
 
 typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,


* [PATCH v5 4/9] block: Make sub_page_limit_queues available in debugfs
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
                   ` (2 preceding siblings ...)
  2023-05-22 22:25 ` [PATCH v5 3/9] block: Support configuring limits below the page size Bart Van Assche
@ 2023-05-22 22:25 ` Bart Van Assche
  2023-05-27  3:17   ` Luis Chamberlain
  2023-05-22 22:25 ` [PATCH v5 5/9] block: Support submitting passthrough requests with small segments Bart Van Assche
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche,
	Ming Lei, Keith Busch

This new debugfs attribute makes it easier to verify the code that tracks
how many queues require limits below the page size.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-core.c       | 2 ++
 block/blk-mq-debugfs.c | 9 +++++++++
 block/blk-mq-debugfs.h | 6 ++++++
 block/blk-settings.c   | 8 ++++++++
 block/blk.h            | 1 +
 5 files changed, 26 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index 814bfb9c9489..a6726a64d8bc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -45,6 +45,7 @@
 #include <trace/events/block.h>
 
 #include "blk.h"
+#include "blk-mq-debugfs.h"
 #include "blk-mq-sched.h"
 #include "blk-pm.h"
 #include "blk-cgroup.h"
@@ -1203,6 +1204,7 @@ int __init blk_dev_init(void)
 			sizeof(struct request_queue), 0, SLAB_PANIC, NULL);
 
 	blk_debugfs_root = debugfs_create_dir("block", NULL);
+	blk_mq_debugfs_init();
 
 	return 0;
 }
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 68165a50951b..30db8db91e07 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -846,3 +846,12 @@ void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx)
 	debugfs_remove_recursive(hctx->sched_debugfs_dir);
 	hctx->sched_debugfs_dir = NULL;
 }
+
+DEFINE_DEBUGFS_ATTRIBUTE(blk_sub_page_limit_queues_fops,
+			blk_sub_page_limit_queues_get, NULL, "%llu\n");
+
+void blk_mq_debugfs_init(void)
+{
+	debugfs_create_file("sub_page_limit_queues", 0400, blk_debugfs_root,
+			    NULL, &blk_sub_page_limit_queues_fops);
+}
diff --git a/block/blk-mq-debugfs.h b/block/blk-mq-debugfs.h
index 9c7d4b6117d4..7942119051f5 100644
--- a/block/blk-mq-debugfs.h
+++ b/block/blk-mq-debugfs.h
@@ -17,6 +17,8 @@ struct blk_mq_debugfs_attr {
 	const struct seq_operations *seq_ops;
 };
 
+void blk_mq_debugfs_init(void);
+
 int __blk_mq_debugfs_rq_show(struct seq_file *m, struct request *rq);
 int blk_mq_debugfs_rq_show(struct seq_file *m, void *v);
 
@@ -36,6 +38,10 @@ void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx);
 void blk_mq_debugfs_register_rqos(struct rq_qos *rqos);
 void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos);
 #else
+static inline void blk_mq_debugfs_init(void)
+{
+}
+
 static inline void blk_mq_debugfs_register(struct request_queue *q)
 {
 }
diff --git a/block/blk-settings.c b/block/blk-settings.c
index a4ef1dfeef76..5cd95a3785fd 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -107,6 +107,14 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
 }
 EXPORT_SYMBOL(blk_queue_bounce_limit);
 
+/* For debugfs. */
+int blk_sub_page_limit_queues_get(void *data, u64 *val)
+{
+	*val = READ_ONCE(blk_nr_sub_page_limit_queues);
+
+	return 0;
+}
+
 /**
  * blk_enable_sub_page_limits - enable support for max_segment_size values smaller than PAGE_SIZE and for max_hw_sectors values below PAGE_SIZE >> SECTOR_SHIFT
  * @lim: request queue limits for which to enable support of these features.
diff --git a/block/blk.h b/block/blk.h
index 49526127ea08..3c63ec0f1721 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -39,6 +39,7 @@ static inline bool blk_queue_sub_page_limits(const struct queue_limits *lim)
 		lim->sub_page_limits;
 }
 
+int blk_sub_page_limit_queues_get(void *data, u64 *val);
 void blk_disable_sub_page_limits(struct queue_limits *q);
 
 void blk_freeze_queue(struct request_queue *q);


* [PATCH v5 5/9] block: Support submitting passthrough requests with small segments
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
                   ` (3 preceding siblings ...)
  2023-05-22 22:25 ` [PATCH v5 4/9] block: Make sub_page_limit_queues available in debugfs Bart Van Assche
@ 2023-05-22 22:25 ` Bart Van Assche
  2023-05-22 22:25 ` [PATCH v5 6/9] block: Add support for filesystem requests and " Bart Van Assche
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche,
	Ming Lei, Keith Busch

If the segment size is smaller than the page size, there may be multiple
segments per bvec even if a bvec only contains a single page. Hence this
patch.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-map.c |  2 +-
 block/blk.h     | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 04c55f1c492e..e1355331019a 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -535,7 +535,7 @@ int blk_rq_append_bio(struct request *rq, struct bio *bio)
 	unsigned int nr_segs = 0;
 
 	bio_for_each_bvec(bv, bio, iter)
-		nr_segs++;
+		nr_segs += blk_segments(&rq->q->limits, bv.bv_len);
 
 	if (!rq->bio) {
 		blk_rq_bio_prep(rq, bio, nr_segs);
diff --git a/block/blk.h b/block/blk.h
index 3c63ec0f1721..dfeebbb55a42 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -86,6 +86,24 @@ struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
 		gfp_t gfp_mask);
 void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned short nr_vecs);
 
+/* Number of DMA segments required to transfer @bytes data. */
+static inline unsigned int blk_segments(const struct queue_limits *limits,
+					unsigned int bytes)
+{
+	if (!blk_queue_sub_page_limits(limits))
+		return 1;
+
+	{
+		const unsigned int mss = limits->max_segment_size;
+
+		if (bytes <= mss)
+			return 1;
+		if (is_power_of_2(mss))
+			return round_up(bytes, mss) >> ilog2(mss);
+		return (bytes + mss - 1) / mss;
+	}
+}
+
 static inline bool biovec_phys_mergeable(struct request_queue *q,
 		struct bio_vec *vec1, struct bio_vec *vec2)
 {


* [PATCH v5 6/9] block: Add support for filesystem requests and small segments
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
                   ` (4 preceding siblings ...)
  2023-05-22 22:25 ` [PATCH v5 5/9] block: Support submitting passthrough requests with small segments Bart Van Assche
@ 2023-05-22 22:25 ` Bart Van Assche
  2023-05-22 22:25 ` [PATCH v5 7/9] block: Add support for small segments in blk_rq_map_user_iov() Bart Van Assche
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche,
	Ming Lei, Keith Busch

Add support for segments smaller than the page size in the bio splitting
code and in the bio submission code.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-merge.c |  8 ++++++--
 block/blk-mq.c    |  2 ++
 block/blk.h       | 11 +++++------
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 65e75efa9bd3..0b28f6df07bc 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -294,7 +294,8 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 		if (nsegs < lim->max_segments &&
 		    bytes + bv.bv_len <= max_bytes &&
 		    bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
-			nsegs++;
+			/* single-page bvec optimization */
+			nsegs += blk_segments(lim, bv.bv_len);
 			bytes += bv.bv_len;
 		} else {
 			if (bvec_split_segs(lim, &bv, &nsegs, &bytes,
@@ -544,7 +545,10 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 			    __blk_segment_map_sg_merge(q, &bvec, &bvprv, sg))
 				goto next_bvec;
 
-			if (bvec.bv_offset + bvec.bv_len <= PAGE_SIZE)
+			if (bvec.bv_offset + bvec.bv_len <= PAGE_SIZE &&
+			    (!blk_queue_sub_page_limits(&q->limits) ||
+			     bvec.bv_len <= q->limits.max_segment_size))
+				/* single-segment bvec optimization */
 				nsegs += __blk_bvec_map_sg(bvec, sglist, sg);
 			else
 				nsegs += blk_bvec_map_sg(q, &bvec, sglist, sg);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 551e7760f45e..3a2dd49b8186 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2932,6 +2932,8 @@ void blk_mq_submit_bio(struct bio *bio)
 		bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
 		if (!bio)
 			return;
+	} else if (bio->bi_vcnt == 1) {
+		nr_segs = blk_segments(&q->limits, bio->bi_io_vec[0].bv_len);
 	}
 
 	if (!bio_integrity_prep(bio))
diff --git a/block/blk.h b/block/blk.h
index dfeebbb55a42..6e5f86ed3cbc 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -332,13 +332,12 @@ static inline bool bio_may_exceed_limits(struct bio *bio,
 	}
 
 	/*
-	 * All drivers must accept single-segments bios that are <= PAGE_SIZE.
-	 * This is a quick and dirty check that relies on the fact that
-	 * bi_io_vec[0] is always valid if a bio has data.  The check might
-	 * lead to occasional false negatives when bios are cloned, but compared
-	 * to the performance impact of cloned bios themselves the loop below
-	 * doesn't matter anyway.
+	 * Check whether bio splitting should be performed. This check may
+	 * trigger the bio splitting code even if splitting is not necessary.
 	 */
+	if (blk_queue_sub_page_limits(lim) && bio->bi_io_vec &&
+	    bio->bi_io_vec->bv_len > lim->max_segment_size)
+		return true;
 	return lim->chunk_sectors || bio->bi_vcnt != 1 ||
 		bio->bi_io_vec->bv_len + bio->bi_io_vec->bv_offset > PAGE_SIZE;
 }


* [PATCH v5 7/9] block: Add support for small segments in blk_rq_map_user_iov()
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
                   ` (5 preceding siblings ...)
  2023-05-22 22:25 ` [PATCH v5 6/9] block: Add support for filesystem requests and " Bart Van Assche
@ 2023-05-22 22:25 ` Bart Van Assche
  2023-05-22 22:25 ` [PATCH v5 8/9] scsi_debug: Support configuring the maximum segment size Bart Van Assche
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche,
	Ming Lei, Keith Busch

Before changing the return value of bio_add_hw_page() into a value in
the range [0, len], make blk_rq_map_user_iov() fall back to copying data
if mapping the data is not possible due to the segment limit.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-map.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index e1355331019a..c04f1698672a 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -308,17 +308,26 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		else {
 			for (j = 0; j < npages; j++) {
 				struct page *page = pages[j];
-				unsigned int n = PAGE_SIZE - offs;
+				unsigned int n = PAGE_SIZE - offs, added;
 				bool same_page = false;
 
 				if (n > bytes)
 					n = bytes;
 
-				if (!bio_add_hw_page(rq->q, bio, page, n, offs,
-						     max_sectors, &same_page)) {
+				added = bio_add_hw_page(rq->q, bio, page, n,
+						offs, max_sectors, &same_page);
+				if (added == 0) {
 					if (same_page)
 						put_page(page);
 					break;
+				} else if (added != n) {
+					/*
+					 * The segment size is smaller than the
+					 * page size and an iov exceeds the
+					 * segment size. Give up.
+					 */
+					ret = -EREMOTEIO;
+					goto out_unmap;
 				}
 
 				bytes -= n;
@@ -658,10 +667,18 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
 
 	i = *iter;
 	do {
-		if (copy)
+		if (copy) {
 			ret = bio_copy_user_iov(rq, map_data, &i, gfp_mask);
-		else
+		} else {
 			ret = bio_map_user_iov(rq, &i, gfp_mask);
+			/*
+			 * Fall back to copying the data if bio_map_user_iov()
+			 * returns -EREMOTEIO.
+			 */
+			if (ret == -EREMOTEIO)
+				ret = bio_copy_user_iov(rq, map_data, &i,
+							gfp_mask);
+		}
 		if (ret)
 			goto unmap_rq;
 		if (!bio)


* [PATCH v5 8/9] scsi_debug: Support configuring the maximum segment size
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
                   ` (6 preceding siblings ...)
  2023-05-22 22:25 ` [PATCH v5 7/9] block: Add support for small segments in blk_rq_map_user_iov() Bart Van Assche
@ 2023-05-22 22:25 ` Bart Van Assche
  2023-05-24 20:50   ` Douglas Gilbert
  2023-05-22 22:25 ` [PATCH v5 9/9] null_blk: " Bart Van Assche
  2023-06-09 17:14 ` [PATCH v5 0/9] Support limits below the page size Sandeep Dhavale
  9 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche,
	Doug Gilbert, Martin K . Petersen, James E.J. Bottomley

Add a kernel module parameter for configuring the maximum segment size.
This patch enables testing SCSI support for segments smaller than the
page size.

Cc: Doug Gilbert <dgilbert@interlog.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/scsi/scsi_debug.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 8c58128ad32a..e951c622bf64 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -752,6 +752,7 @@ static int sdebug_host_max_queue;	/* per host */
 static int sdebug_lowest_aligned = DEF_LOWEST_ALIGNED;
 static int sdebug_max_luns = DEF_MAX_LUNS;
 static int sdebug_max_queue = SDEBUG_CANQUEUE;	/* per submit queue */
+static unsigned int sdebug_max_segment_size = BLK_MAX_SEGMENT_SIZE;
 static unsigned int sdebug_medium_error_start = OPT_MEDIUM_ERR_ADDR;
 static int sdebug_medium_error_count = OPT_MEDIUM_ERR_NUM;
 static int sdebug_ndelay = DEF_NDELAY;	/* if > 0 then unit is nanoseconds */
@@ -5735,6 +5736,7 @@ module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
 module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
 module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
 module_param_named(max_queue, sdebug_max_queue, int, S_IRUGO | S_IWUSR);
+module_param_named(max_segment_size, sdebug_max_segment_size, uint, S_IRUGO);
 module_param_named(medium_error_count, sdebug_medium_error_count, int,
 		   S_IRUGO | S_IWUSR);
 module_param_named(medium_error_start, sdebug_medium_error_start, int,
@@ -5811,6 +5813,7 @@ MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
 MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
 MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
 MODULE_PARM_DESC(max_queue, "max number of queued commands (1 to max(def))");
+MODULE_PARM_DESC(max_segment_size, "max bytes in a single segment");
 MODULE_PARM_DESC(medium_error_count, "count of sectors to return follow on MEDIUM error");
 MODULE_PARM_DESC(medium_error_start, "starting sector number to return MEDIUM error");
 MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)");
@@ -7723,6 +7726,7 @@ static int sdebug_driver_probe(struct device *dev)
 
 	sdebug_driver_template.can_queue = sdebug_max_queue;
 	sdebug_driver_template.cmd_per_lun = sdebug_max_queue;
+	sdebug_driver_template.max_segment_size = sdebug_max_segment_size;
 	if (!sdebug_clustering)
 		sdebug_driver_template.dma_boundary = PAGE_SIZE - 1;
 

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v5 9/9] null_blk: Support configuring the maximum segment size
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
                   ` (7 preceding siblings ...)
  2023-05-22 22:25 ` [PATCH v5 8/9] scsi_debug: Support configuring the maximum segment size Bart Van Assche
@ 2023-05-22 22:25 ` Bart Van Assche
  2023-06-09 17:14 ` [PATCH v5 0/9] Support limits below the page size Sandeep Dhavale
  9 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-22 22:25 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof, Bart Van Assche,
	Ming Lei, Damien Le Moal, Chaitanya Kulkarni, Johannes Thumshirn,
	Vincent Fu, Christophe JAILLET, Akinobu Mita,
	Shin'ichiro Kawasaki

Add support for configuring the maximum segment size and for segments
smaller than the page size.

This patch enables testing segments smaller than the page size with a
driver that does not call blk_rq_map_sg().

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/null_blk/main.c     | 19 ++++++++++++++++---
 drivers/block/null_blk/null_blk.h |  1 +
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index b3fedafe301e..9c9098f1bd52 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -157,6 +157,10 @@ static int g_max_sectors;
 module_param_named(max_sectors, g_max_sectors, int, 0444);
 MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");
 
+static unsigned int g_max_segment_size = BLK_MAX_SEGMENT_SIZE;
+module_param_named(max_segment_size, g_max_segment_size, int, 0444);
+MODULE_PARM_DESC(max_segment_size, "Maximum size of a segment in bytes");
+
 static unsigned int nr_devices = 1;
 module_param(nr_devices, uint, 0444);
 MODULE_PARM_DESC(nr_devices, "Number of devices to register");
@@ -409,6 +413,7 @@ NULLB_DEVICE_ATTR(home_node, uint, NULL);
 NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
 NULLB_DEVICE_ATTR(blocksize, uint, NULL);
 NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
+NULLB_DEVICE_ATTR(max_segment_size, uint, NULL);
 NULLB_DEVICE_ATTR(irqmode, uint, NULL);
 NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
 NULLB_DEVICE_ATTR(index, uint, NULL);
@@ -550,6 +555,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_queue_mode,
 	&nullb_device_attr_blocksize,
 	&nullb_device_attr_max_sectors,
+	&nullb_device_attr_max_segment_size,
 	&nullb_device_attr_irqmode,
 	&nullb_device_attr_hw_queue_depth,
 	&nullb_device_attr_index,
@@ -652,7 +658,8 @@ static ssize_t memb_group_features_show(struct config_item *item, char *page)
 	return snprintf(page, PAGE_SIZE,
 			"badblocks,blocking,blocksize,cache_size,"
 			"completion_nsec,discard,home_node,hw_queue_depth,"
-			"irqmode,max_sectors,mbps,memory_backed,no_sched,"
+			"irqmode,max_sectors,max_segment_size,mbps,"
+			"memory_backed,no_sched,"
 			"poll_queues,power,queue_mode,shared_tag_bitmap,size,"
 			"submit_queues,use_per_node_hctx,virt_boundary,zoned,"
 			"zone_capacity,zone_max_active,zone_max_open,"
@@ -722,6 +729,7 @@ static struct nullb_device *null_alloc_dev(void)
 	dev->queue_mode = g_queue_mode;
 	dev->blocksize = g_bs;
 	dev->max_sectors = g_max_sectors;
+	dev->max_segment_size = g_max_segment_size;
 	dev->irqmode = g_irqmode;
 	dev->hw_queue_depth = g_hw_queue_depth;
 	dev->blocking = g_blocking;
@@ -1248,6 +1256,8 @@ static int null_transfer(struct nullb *nullb, struct page *page,
 	unsigned int valid_len = len;
 	int err = 0;
 
+	WARN_ONCE(len > dev->max_segment_size, "%u > %u\n", len,
+		  dev->max_segment_size);
 	if (!is_write) {
 		if (dev->zoned)
 			valid_len = null_zone_valid_read_len(nullb,
@@ -1283,7 +1293,8 @@ static int null_handle_rq(struct nullb_cmd *cmd)
 
 	spin_lock_irq(&nullb->lock);
 	rq_for_each_segment(bvec, rq, iter) {
-		len = bvec.bv_len;
+		len = min(bvec.bv_len, nullb->dev->max_segment_size);
+		bvec.bv_len = len;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				     op_is_write(req_op(rq)), sector,
 				     rq->cmd_flags & REQ_FUA);
@@ -1310,7 +1321,8 @@ static int null_handle_bio(struct nullb_cmd *cmd)
 
 	spin_lock_irq(&nullb->lock);
 	bio_for_each_segment(bvec, bio, iter) {
-		len = bvec.bv_len;
+		len = min(bvec.bv_len, nullb->dev->max_segment_size);
+		bvec.bv_len = len;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				     op_is_write(bio_op(bio)), sector,
 				     bio->bi_opf & REQ_FUA);
@@ -2161,6 +2173,7 @@ static int null_add_dev(struct nullb_device *dev)
 		dev->max_sectors = queue_max_hw_sectors(nullb->q);
 	dev->max_sectors = min(dev->max_sectors, BLK_DEF_MAX_SECTORS);
 	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
+	blk_queue_max_segment_size(nullb->q, dev->max_segment_size);
 
 	if (dev->virt_boundary)
 		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 929f659dd255..7bf80b0035f5 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -107,6 +107,7 @@ struct nullb_device {
 	unsigned int queue_mode; /* block interface */
 	unsigned int blocksize; /* block size */
 	unsigned int max_sectors; /* Max sectors per command */
+	unsigned int max_segment_size; /* Max size of a single DMA segment. */
 	unsigned int irqmode; /* IRQ completion handler */
 	unsigned int hw_queue_depth; /* queue depth */
 	unsigned int index; /* index of the disk, only valid with a disk */

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/9] block: Use pr_info() instead of printk(KERN_INFO ...)
  2023-05-22 22:25 ` [PATCH v5 1/9] block: Use pr_info() instead of printk(KERN_INFO ...) Bart Van Assche
@ 2023-05-22 23:10   ` Luis Chamberlain
  2023-05-27 16:09     ` Bart Van Assche
  0 siblings, 1 reply; 23+ messages in thread
From: Luis Chamberlain @ 2023-05-22 23:10 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, Ming Lei,
	Keith Busch

On Mon, May 22, 2023 at 03:25:33PM -0700, Bart Van Assche wrote:
> Switch to the modern style of printing kernel messages. Use %u instead
> of %d to print unsigned integers.
> 
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Ming Lei <ming.lei@redhat.com>
> Cc: Keith Busch <kbusch@kernel.org>
> Cc: Luis Chamberlain <mcgrof@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  block/blk-settings.c | 12 ++++--------
>  1 file changed, 4 insertions(+), 8 deletions(-)
> 
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index 896b4654ab00..1d8d2ae7bdf4 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -127,8 +127,7 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
>  
>  	if ((max_hw_sectors << 9) < PAGE_SIZE) {
>  		max_hw_sectors = 1 << (PAGE_SHIFT - 9);
> -		printk(KERN_INFO "%s: set to minimum %d\n",
> -		       __func__, max_hw_sectors);
> +		pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);

You may want to then also add at the very top of the file before any
includes something like:

#define pr_fmt(fmt) "blk-settings: " fmt

You can see the other defines of pr_fmt on block/*.c

Other than that:

Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>

  Luis

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/9] block: Prepare for supporting sub-page limits
  2023-05-22 22:25 ` [PATCH v5 2/9] block: Prepare for supporting sub-page limits Bart Van Assche
@ 2023-05-22 23:26   ` Luis Chamberlain
  0 siblings, 0 replies; 23+ messages in thread
From: Luis Chamberlain @ 2023-05-22 23:26 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, Ming Lei,
	Keith Busch

On Mon, May 22, 2023 at 03:25:34PM -0700, Bart Van Assche wrote:
> Introduce variables that represent the lower configuration bounds. This
> patch does not change any functionality.

Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>

  Luis

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 8/9] scsi_debug: Support configuring the maximum segment size
  2023-05-22 22:25 ` [PATCH v5 8/9] scsi_debug: Support configuring the maximum segment size Bart Van Assche
@ 2023-05-24 20:50   ` Douglas Gilbert
  0 siblings, 0 replies; 23+ messages in thread
From: Douglas Gilbert @ 2023-05-24 20:50 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, jyescas, mcgrof,
	Martin K . Petersen, James E.J. Bottomley

On 2023-05-22 18:25, Bart Van Assche wrote:
> Add a kernel module parameter for configuring the maximum segment size.
> This patch enables testing SCSI support for segments smaller than the
> page size.
> 
> Cc: Doug Gilbert <dgilbert@interlog.com>
> Cc: Martin K. Petersen <martin.petersen@oracle.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Looks good.

Acked-by: Douglas Gilbert <dgilbert@interlog.com>

> ---
>   drivers/scsi/scsi_debug.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
> index 8c58128ad32a..e951c622bf64 100644
> --- a/drivers/scsi/scsi_debug.c
> +++ b/drivers/scsi/scsi_debug.c
> @@ -752,6 +752,7 @@ static int sdebug_host_max_queue;	/* per host */
>   static int sdebug_lowest_aligned = DEF_LOWEST_ALIGNED;
>   static int sdebug_max_luns = DEF_MAX_LUNS;
>   static int sdebug_max_queue = SDEBUG_CANQUEUE;	/* per submit queue */
> +static unsigned int sdebug_max_segment_size = BLK_MAX_SEGMENT_SIZE;
>   static unsigned int sdebug_medium_error_start = OPT_MEDIUM_ERR_ADDR;
>   static int sdebug_medium_error_count = OPT_MEDIUM_ERR_NUM;
>   static int sdebug_ndelay = DEF_NDELAY;	/* if > 0 then unit is nanoseconds */
> @@ -5735,6 +5736,7 @@ module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
>   module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
>   module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
>   module_param_named(max_queue, sdebug_max_queue, int, S_IRUGO | S_IWUSR);
> +module_param_named(max_segment_size, sdebug_max_segment_size, uint, S_IRUGO);
>   module_param_named(medium_error_count, sdebug_medium_error_count, int,
>   		   S_IRUGO | S_IWUSR);
>   module_param_named(medium_error_start, sdebug_medium_error_start, int,
> @@ -5811,6 +5813,7 @@ MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
>   MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
>   MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
>   MODULE_PARM_DESC(max_queue, "max number of queued commands (1 to max(def))");
> +MODULE_PARM_DESC(max_segment_size, "max bytes in a single segment");
>   MODULE_PARM_DESC(medium_error_count, "count of sectors to return follow on MEDIUM error");
>   MODULE_PARM_DESC(medium_error_start, "starting sector number to return MEDIUM error");
>   MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)");
> @@ -7723,6 +7726,7 @@ static int sdebug_driver_probe(struct device *dev)
>   
>   	sdebug_driver_template.can_queue = sdebug_max_queue;
>   	sdebug_driver_template.cmd_per_lun = sdebug_max_queue;
> +	sdebug_driver_template.max_segment_size = sdebug_max_segment_size;
>   	if (!sdebug_clustering)
>   		sdebug_driver_template.dma_boundary = PAGE_SIZE - 1;
>   


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 3/9] block: Support configuring limits below the page size
  2023-05-22 22:25 ` [PATCH v5 3/9] block: Support configuring limits below the page size Bart Van Assche
@ 2023-05-27  3:16   ` Luis Chamberlain
  2023-05-27 16:20     ` Bart Van Assche
  0 siblings, 1 reply; 23+ messages in thread
From: Luis Chamberlain @ 2023-05-27  3:16 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, Ming Lei,
	Keith Busch

On Mon, May 22, 2023 at 03:25:35PM -0700, Bart Van Assche wrote:
> Allow block drivers to configure the following:
> * Maximum number of hardware sectors values smaller than
>   PAGE_SIZE >> SECTOR_SHIFT. For PAGE_SIZE = 4096 this means that values
>   below 8 become supported.
> * A maximum segment size below the page size. This is most useful
>   for page sizes above 4096 bytes.
> 
> The blk_sub_page_segments static branch will be used in later patches to
> avoid affecting the performance of block drivers that support segments >=
> PAGE_SIZE and max_hw_sectors >= PAGE_SIZE >> SECTOR_SHIFT.
> 
> This patch may change the behavior of existing block drivers from not
> working into working.

That's quite an understatement.

> If a block driver calls
> blk_queue_max_hw_sectors() or blk_queue_max_segment_size(), this is
> usually done to configure the maximum supported limits. An attempt to
> configure a limit below what is supported by the block layer causes the
> block layer to select a larger value. If that value is not supported by
> the block driver, this may cause other data to be transferred than
> requested, a kernel crash or other undesirable behavior.

Right, which in the worst case could expose a firmware bug or whatever.

So I see this as a critical fix too. And it gets me wondering what has
happened for 512 byte controllers on 4K PAGE_SIZE?

> + * blk_enable_sub_page_limits - enable support for max_segment_size values smaller than PAGE_SIZE and for max_hw_sectors values below PAGE_SIZE >> SECTOR_SHIFT

Line length: 100 columns is an allowed exception nowadays, but I'm
sticking to the old-school 80.

> + * blk_disable_sub_page_limits - disable support for max_segment_size values smaller than PAGE_SIZE and for max_hw_sectors values below PAGE_SIZE >> SECTOR_SHIFT

Same.

> @@ -126,6 +173,11 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
>  	unsigned int min_max_hw_sectors = PAGE_SIZE >> SECTOR_SHIFT;
>  	unsigned int max_sectors;
>  
> +	if (max_hw_sectors < min_max_hw_sectors) {
> +		blk_enable_sub_page_limits(limits);
> +		min_max_hw_sectors = 1;
> +	}
> +
>  	if (max_hw_sectors < min_max_hw_sectors) {
>  		max_hw_sectors = min_max_hw_sectors;
>  		pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);

It would seem like max_dev_sectors could have saved the day too, but
the documentation says it is set by the "disk". I see scsi/sd.c and
drivers/s390/block/dasd_*.c set it as well; is that a layering
violation, or was it meant to help with similar problems? If not,
could storage controllers have used it for this issue as well?

Could the documentation for blk_queue_max_hw_sectors() be enhanced to
clarify this?

The way I'm thinking about this is, if this is a fix for stable too,
what would a minimum safe stable fix be like? And then after whatever
we need to make it better (non stable fixes).

  Luis

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 4/9] block: Make sub_page_limit_queues available in debugfs
  2023-05-22 22:25 ` [PATCH v5 4/9] block: Make sub_page_limit_queues available in debugfs Bart Van Assche
@ 2023-05-27  3:17   ` Luis Chamberlain
  0 siblings, 0 replies; 23+ messages in thread
From: Luis Chamberlain @ 2023-05-27  3:17 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, Ming Lei,
	Keith Busch

On Mon, May 22, 2023 at 03:25:36PM -0700, Bart Van Assche wrote:
> This new debugfs attribute makes it easier to verify the code that tracks
> how many queues require limits below the page size.
> 
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Ming Lei <ming.lei@redhat.com>
> Cc: Keith Busch <kbusch@kernel.org>
> Cc: Luis Chamberlain <mcgrof@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>

  Luis

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/9] block: Use pr_info() instead of printk(KERN_INFO ...)
  2023-05-22 23:10   ` Luis Chamberlain
@ 2023-05-27 16:09     ` Bart Van Assche
  0 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-27 16:09 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, Ming Lei,
	Keith Busch

On 5/22/23 16:10, Luis Chamberlain wrote:
> On Mon, May 22, 2023 at 03:25:33PM -0700, Bart Van Assche wrote:
>> Switch to the modern style of printing kernel messages. Use %u instead
>> of %d to print unsigned integers.
>>
>> Cc: Christoph Hellwig <hch@lst.de>
>> Cc: Ming Lei <ming.lei@redhat.com>
>> Cc: Keith Busch <kbusch@kernel.org>
>> Cc: Luis Chamberlain <mcgrof@kernel.org>
>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
>> ---
>>   block/blk-settings.c | 12 ++++--------
>>   1 file changed, 4 insertions(+), 8 deletions(-)
>>
>> diff --git a/block/blk-settings.c b/block/blk-settings.c
>> index 896b4654ab00..1d8d2ae7bdf4 100644
>> --- a/block/blk-settings.c
>> +++ b/block/blk-settings.c
>> @@ -127,8 +127,7 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
>>   
>>   	if ((max_hw_sectors << 9) < PAGE_SIZE) {
>>   		max_hw_sectors = 1 << (PAGE_SHIFT - 9);
>> -		printk(KERN_INFO "%s: set to minimum %d\n",
>> -		       __func__, max_hw_sectors);
>> +		pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
> 
> You may want to then also add at the very top of the file before any
> includes something like:
> 
> #define pr_fmt(fmt) "blk-settings: " fmt
> 
> You can see the other defines of pr_fmt on block/*.c

My goal with this patch is *not* to modify the output so I prefer not to 
define the pr_fmt() macro in this file.

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 3/9] block: Support configuring limits below the page size
  2023-05-27  3:16   ` Luis Chamberlain
@ 2023-05-27 16:20     ` Bart Van Assche
  2023-05-28 20:33       ` Luis Chamberlain
  0 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-27 16:20 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, Ming Lei,
	Keith Busch

On 5/26/23 20:16, Luis Chamberlain wrote:
> So I see this as a critical fix too. And it gets me wondering what has
> happened for 512 byte controllers on 4K PAGE_SIZE?

What is a "512 byte controller"? Most storage controllers I'm familiar 
with support DMA segments well above 4 KiB.

>> @@ -126,6 +173,11 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
>>   	unsigned int min_max_hw_sectors = PAGE_SIZE >> SECTOR_SHIFT;
>>   	unsigned int max_sectors;
>>   
>> +	if (max_hw_sectors < min_max_hw_sectors) {
>> +		blk_enable_sub_page_limits(limits);
>> +		min_max_hw_sectors = 1;
>> +	}
>> +
>>   	if (max_hw_sectors < min_max_hw_sectors) {
>>   		max_hw_sectors = min_max_hw_sectors;
>>   		pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
> 
> It would seem like max_dev_sectors would have saved the day too,
> but that is said to be set by the "disk" on the documentation.
> I see scsi/sd.c and drivers/s390/block/dasd_*.c set this too,
> is that a layering violation, or was that to help perhaps with
> similar problems? If not could stroage controller have used this
> for this issue as well?

min_not_zero(max_hw_sectors, max_dev_sectors) is the maximum transfer 
size used by the block layer. max_hw_sectors typically represents the 
transfer limit of the DMA controller and max_dev_sectors typically 
represents the transfer limit of the storage device. If e.g. a RAID 
controller exists between the host and the storage devices these limits 
can be different.

> Could the documentation for blk_queue_max_hw_sectors() be enhanced to
> clarify this?

I will look into this.

> The way I'm thinking about this is, if this is a fix for stable too,
> what would a minimum safe stable fix be like? And then after whatever
> we need to make it better (non stable fixes).

Hmm ... doesn't the "upstream first" rule apply to stable kernels?
Shouldn't only patches that are already upstream land in stable
kernels, instead of applying different patches to stable kernels?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 3/9] block: Support configuring limits below the page size
  2023-05-27 16:20     ` Bart Van Assche
@ 2023-05-28 20:33       ` Luis Chamberlain
  2023-05-28 22:32         ` Bart Van Assche
  0 siblings, 1 reply; 23+ messages in thread
From: Luis Chamberlain @ 2023-05-28 20:33 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, Ming Lei,
	Keith Busch

On Sat, May 27, 2023 at 09:20:30AM -0700, Bart Van Assche wrote:
> On 5/26/23 20:16, Luis Chamberlain wrote:
> > So I see this as a critical fix too. And it gets me wondering what has
> > happened for 512 byte controllers on 4K PAGE_SIZE?
> 
> What is a "512 byte controller"? Most storage controllers I'm familiar with
> support DMA segments well above 4 KiB.

Never mind, there shouldn't be any such controllers.

> I will look into this.
> 
> > The way I'm thinking about this is, if this is a fix for stable too,
> > what would a minimum safe stable fix be like? And then after whatever
> > we need to make it better (non stable fixes).
> 
> Hmm ... doesn't the "upstream first" rule apply to stable kernels?
> Shouldn't only patches land in stable kernels that are already
> upstream instead of applying different patches on stable kernels?

That's right, so the question is: is there a way to make simpler
modifications which might be sensible for this situation for stable
first, and then enhancements which don't go to stable on top?

What would the minimum fix look like for stable?

  Luis

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 3/9] block: Support configuring limits below the page size
  2023-05-28 20:33       ` Luis Chamberlain
@ 2023-05-28 22:32         ` Bart Van Assche
  2023-05-31  5:40           ` Luis Chamberlain
  0 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-28 22:32 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, Ming Lei,
	Keith Busch

On 5/28/23 13:33, Luis Chamberlain wrote:
> That's right, so the question is, is there a way to make simpler
> modifications which might be sensible for this situation for stable
> first, and then enhancements which don't go to stable on top ?
> 
> What would the minimum fix look like for stable?

Let's follow the usual process: work on integrating these changes into
the upstream kernel first, and deal with backporting them to stable
kernels after agreement has been reached about what the upstream
patches should look like.

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 3/9] block: Support configuring limits below the page size
  2023-05-28 22:32         ` Bart Van Assche
@ 2023-05-31  5:40           ` Luis Chamberlain
  0 siblings, 0 replies; 23+ messages in thread
From: Luis Chamberlain @ 2023-05-31  5:40 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, Ming Lei,
	Keith Busch

On Sun, May 28, 2023 at 03:32:45PM -0700, Bart Van Assche wrote:
> On 5/28/23 13:33, Luis Chamberlain wrote:
> > That's right, so the question is, is there a way to make simpler
> > modifications which might be sensible for this situation for stable
> > first, and then enhancements which don't go to stable on top ?
> > 
> > What would the minimum fix look like for stable?
> 
> Let's follow the usual process: work integrating these changes in the
> upstream kernel first and deal with backporting these changes to stable
> kernels after agreement has been achieved about what the upstream
> patches should look like.

I'm not talking about deviating away from that process.

Sometimes backporting a large chunk of changes cannot be done due to
its size or other constraints. So one way to address this proactively
is to see whether some compromises can be made for stable, while
upstream carries the rest.

I can't see a way to do this though...

So at least as-is, for the rest of the patches in this series:

Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>

  Luis

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 0/9] Support limits below the page size
  2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
                   ` (8 preceding siblings ...)
  2023-05-22 22:25 ` [PATCH v5 9/9] null_blk: " Bart Van Assche
@ 2023-06-09 17:14 ` Sandeep Dhavale
  2023-06-12 18:15   ` Bart Van Assche
  9 siblings, 1 reply; 23+ messages in thread
From: Sandeep Dhavale @ 2023-06-09 17:14 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, mcgrof

On Mon, May 22, 2023 at 3:25 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> Hi Jens,
>
> We want to improve Android performance by increasing the page size from 4 KiB
> to 16 KiB. However, some of the storage controllers we care about do not support
> DMA segments larger than 4 KiB. Hence the need support for DMA segments that are
> smaller than the size of one virtual memory page. This patch series implements
> that support. Please consider this patch series for the next merge window.
>
> Thanks,
>
> Bart.
>
> Changes compared to v4:
> - Fixed the debugfs patch such that the behavior for creating the block
>   debugfs directory is retained.
> - Made the description of patch "Support configuring limits below the page
>   size" more detailed. Split that patch into two patches.
> - Added patch "Use pr_info() instead of printk(KERN_INFO ...)".
>
> Changes compared to v3:
> - Removed CONFIG_BLK_SUB_PAGE_SEGMENTS and QUEUE_FLAG_SUB_PAGE_SEGMENTS.
>   Replaced these by a new member in struct queue_limits and a static branch.
> - The static branch that controls whether or not sub-page limits are enabled
>   is set by the block layer core instead of by block drivers.
> - Dropped the patches that are no longer needed (SCSI core and UFS Exynos
>   driver).
>
> Changes compared to v2:
> - For SCSI drivers, only set flag QUEUE_FLAG_SUB_PAGE_SEGMENTS if necessary.
> - In the scsi_debug patch, sorted kernel module parameters alphabetically.
>   Only set flag QUEUE_FLAG_SUB_PAGE_SEGMENTS if necessary.
> - Added a patch for the UFS Exynos driver that enables
>   CONFIG_BLK_SUB_PAGE_SEGMENTS if the page size exceeds 4 KiB.
>
> Changes compared to v1:
> - Added a CONFIG variable that controls whether or not small segment support
>   is enabled.
> - Improved patch descriptions.
>
> Bart Van Assche (9):
>   block: Use pr_info() instead of printk(KERN_INFO ...)
>   block: Prepare for supporting sub-page limits
>   block: Support configuring limits below the page size
>   block: Make sub_page_limit_queues available in debugfs
>   block: Support submitting passthrough requests with small segments
>   block: Add support for filesystem requests and small segments
>   block: Add support for small segments in blk_rq_map_user_iov()
>   scsi_debug: Support configuring the maximum segment size
>   null_blk: Support configuring the maximum segment size
>
>  block/blk-core.c                  |  4 ++
>  block/blk-map.c                   | 29 +++++++---
>  block/blk-merge.c                 |  8 ++-
>  block/blk-mq-debugfs.c            |  9 ++++
>  block/blk-mq-debugfs.h            |  6 +++
>  block/blk-mq.c                    |  2 +
>  block/blk-settings.c              | 88 ++++++++++++++++++++++++++-----
>  block/blk.h                       | 39 +++++++++++---
>  drivers/block/null_blk/main.c     | 19 +++++--
>  drivers/block/null_blk/null_blk.h |  1 +
>  drivers/scsi/scsi_debug.c         |  4 ++
>  include/linux/blkdev.h            |  2 +
>  12 files changed, 182 insertions(+), 29 deletions(-)
>
>
We have successfully tested this series with a 16 KiB page size on a
Pixel 6 by applying it to the Android common kernel at [0].

Feel free to add
Tested-by: Sandeep Dhavale <dhavale@google.com>

-Sandeep.

[0] https://android.googlesource.com/kernel/common/+/refs/heads/android14-6.1

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 0/9] Support limits below the page size
  2023-06-09 17:14 ` [PATCH v5 0/9] Support limits below the page size Sandeep Dhavale
@ 2023-06-12 18:15   ` Bart Van Assche
  2023-06-12 18:34     ` Sandeep Dhavale
  0 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-06-12 18:15 UTC (permalink / raw)
  To: Sandeep Dhavale
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, mcgrof

On 6/9/23 10:14, Sandeep Dhavale wrote:
> We have tested this series on Pixel 6 by applying to android common
> kernel at [0] successfully with 16K page size.
> 
> Feel free to add
> Tested-by: Sandeep Dhavale <dhavale@google.com>

Thanks Sandeep for the testing. I assume that the Tested-by tag applies
to patches 1, 2, 3 and 6 of this series?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 0/9] Support limits below the page size
  2023-06-12 18:15   ` Bart Van Assche
@ 2023-06-12 18:34     ` Sandeep Dhavale
  0 siblings, 0 replies; 23+ messages in thread
From: Sandeep Dhavale @ 2023-06-12 18:34 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, jyescas, mcgrof

On Mon, Jun 12, 2023 at 11:15 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 6/9/23 10:14, Sandeep Dhavale wrote:
> > We have tested this series on Pixel 6 by applying to android common
> > kernel at [0] successfully with 16K page size.
> >
> > Feel free to add
> > Tested-by: Sandeep Dhavale <dhavale@google.com>
>
> Thanks Sandeep for the testing. I assume that the Tested-by tag applies
> to patches 1, 2, 3 and 6 of this series?
>
That is correct, Bart; those were the relevant patches for our testing
with a 16 KiB page size. Sorry, I should have been clearer.

Thanks,
Sandeep.

> Thanks,
>
> Bart.

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2023-06-12 18:35 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-22 22:25 [PATCH v5 0/9] Support limits below the page size Bart Van Assche
2023-05-22 22:25 ` [PATCH v5 1/9] block: Use pr_info() instead of printk(KERN_INFO ...) Bart Van Assche
2023-05-22 23:10   ` Luis Chamberlain
2023-05-27 16:09     ` Bart Van Assche
2023-05-22 22:25 ` [PATCH v5 2/9] block: Prepare for supporting sub-page limits Bart Van Assche
2023-05-22 23:26   ` Luis Chamberlain
2023-05-22 22:25 ` [PATCH v5 3/9] block: Support configuring limits below the page size Bart Van Assche
2023-05-27  3:16   ` Luis Chamberlain
2023-05-27 16:20     ` Bart Van Assche
2023-05-28 20:33       ` Luis Chamberlain
2023-05-28 22:32         ` Bart Van Assche
2023-05-31  5:40           ` Luis Chamberlain
2023-05-22 22:25 ` [PATCH v5 4/9] block: Make sub_page_limit_queues available in debugfs Bart Van Assche
2023-05-27  3:17   ` Luis Chamberlain
2023-05-22 22:25 ` [PATCH v5 5/9] block: Support submitting passthrough requests with small segments Bart Van Assche
2023-05-22 22:25 ` [PATCH v5 6/9] block: Add support for filesystem requests and " Bart Van Assche
2023-05-22 22:25 ` [PATCH v5 7/9] block: Add support for small segments in blk_rq_map_user_iov() Bart Van Assche
2023-05-22 22:25 ` [PATCH v5 8/9] scsi_debug: Support configuring the maximum segment size Bart Van Assche
2023-05-24 20:50   ` Douglas Gilbert
2023-05-22 22:25 ` [PATCH v5 9/9] null_blk: " Bart Van Assche
2023-06-09 17:14 ` [PATCH v5 0/9] Support limits below the page size Sandeep Dhavale
2023-06-12 18:15   ` Bart Van Assche
2023-06-12 18:34     ` Sandeep Dhavale
