linux-mmc.vger.kernel.org archive mirror
* [RFC PATCH 0/6] nvmem: add block device NVMEM provider
@ 2023-07-19 22:01 Daniel Golle
  2023-07-19 22:02 ` [RFC PATCH 1/6] mmc: core: set card fwnode_handle Daniel Golle
                   ` (8 more replies)
  0 siblings, 9 replies; 34+ messages in thread
From: Daniel Golle @ 2023-07-19 22:01 UTC (permalink / raw)
  To: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Daniel Golle, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

On embedded devices using an eMMC it is common that one or more (hw/sw)
partitions on the eMMC are used to store MAC addresses and Wi-Fi
calibration EEPROM data.

Implement an NVMEM provider backed by block devices, much like the
NVMEM framework is typically used to let kernel drivers read binary
data from EEPROMs, efuses, flash memory (MTD), ...

In order to be able to reference hardware partitions on an eMMC, add code
to bind each hardware partition to a specific firmware subnode.

This series is meant to open the discussion on what exactly the device
tree schema for block devices and partitions should look like, and even
on whether using the block layer to back the NVMEM device is the way to
go at all -- to me it seemed a good solution because it will be reusable
e.g. for NVMe.
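
To make the discussion more concrete, below is a rough sketch of what
such a device tree could look like. This is purely illustrative and
only reflects what the series currently matches on (a 'block' subnode
of the mmc-card, a 'partitions' subnode below it, 'partname' matching
and the "nvmem-cells" compatible); all labels, node names and offsets
are made up:

	&mmc0 {
		card@0 {
			compatible = "mmc-card";
			reg = <0>;

			block {
				partitions {
					factory: partition-factory {
						partname = "factory";
						compatible = "nvmem-cells";
						#address-cells = <1>;
						#size-cells = <1>;

						macaddr_wan: macaddr@2a {
							reg = <0x2a 0x6>;
						};
					};
				};
			};
		};
	};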

Daniel Golle (6):
  mmc: core: set card fwnode_handle
  mmc: block: set fwnode of disk devices
  block: add new genhd flag GENHD_FL_NO_NVMEM
  mtd: blkdevs: set GENHD_FL_NO_NVMEM
  mtd: ubi: block: set GENHD_FL_NO_NVMEM
  block: implement NVMEM provider

 block/Kconfig             |   8 ++
 block/Makefile            |   1 +
 block/blk-nvmem.c         | 187 ++++++++++++++++++++++++++++++++++++++
 block/blk.h               |  13 +++
 block/genhd.c             |   2 +
 block/partitions/core.c   |   2 +
 drivers/mmc/core/block.c  |   8 ++
 drivers/mmc/core/bus.c    |   2 +
 drivers/mtd/mtd_blkdevs.c |   1 +
 drivers/mtd/ubi/block.c   |   2 +-
 include/linux/blkdev.h    |   3 +
 11 files changed, 228 insertions(+), 1 deletion(-)
 create mode 100644 block/blk-nvmem.c

-- 
2.41.0


* [RFC PATCH 1/6] mmc: core: set card fwnode_handle
  2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
@ 2023-07-19 22:02 ` Daniel Golle
  2023-07-19 22:02 ` [RFC PATCH 2/6] mmc: block: set fwnode of disk devices Daniel Golle
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Daniel Golle @ 2023-07-19 22:02 UTC (permalink / raw)
  To: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Daniel Golle, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

Instead of just populating dev.of_node, also set dev.fwnode in case it
isn't already set and of_node is present.

Signed-off-by: Daniel Golle <daniel@makrotopia.org>
---
 drivers/mmc/core/bus.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
index 2c3074a605fc4..ecf6e23e4c307 100644
--- a/drivers/mmc/core/bus.c
+++ b/drivers/mmc/core/bus.c
@@ -361,6 +361,8 @@ int mmc_add_card(struct mmc_card *card)
 
 	mmc_add_card_debugfs(card);
 	card->dev.of_node = mmc_of_find_child_device(card->host, 0);
+	if (card->dev.of_node && !card->dev.fwnode)
+		card->dev.fwnode = &card->dev.of_node->fwnode;
 
 	device_enable_async_suspend(&card->dev);
 
-- 
2.41.0


* [RFC PATCH 2/6] mmc: block: set fwnode of disk devices
  2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
  2023-07-19 22:02 ` [RFC PATCH 1/6] mmc: core: set card fwnode_handle Daniel Golle
@ 2023-07-19 22:02 ` Daniel Golle
  2023-08-07 13:48   ` Ulf Hansson
  2023-07-19 22:03 ` [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM Daniel Golle
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Daniel Golle @ 2023-07-19 22:02 UTC (permalink / raw)
  To: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Daniel Golle, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

Set the fwnode of disk devices to the 'block', 'boot0' and 'boot1'
subnodes of the mmc-card. This is done in preparation for having the
eMMC act as an NVMEM provider.
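
For illustration, the part of the device tree this patch operates on
could look like the sketch below. Only the 'block', 'boot0' and 'boot1'
node names are looked up by this patch; the surrounding mmc-card node
follows the existing binding and everything else is made up:

	card@0 {
		compatible = "mmc-card";
		reg = <0>;

		block {
			/* becomes the fwnode of e.g. /dev/mmcblk0 */
		};

		boot0 {
			/* becomes the fwnode of e.g. /dev/mmcblk0boot0 */
		};

		boot1 {
			/* becomes the fwnode of e.g. /dev/mmcblk0boot1 */
		};
	};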

Signed-off-by: Daniel Golle <daniel@makrotopia.org>
---
 drivers/mmc/core/block.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index f701efb1fa785..fc1a9f31bd253 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2413,6 +2413,8 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 					      int area_type,
 					      unsigned int part_type)
 {
+	struct fwnode_handle *fwnode;
+	struct device *ddev;
 	struct mmc_blk_data *md;
 	int devidx, ret;
 	char cap_str[10];
@@ -2509,6 +2511,12 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 
 	blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
 
+	ddev = disk_to_dev(md->disk);
+	fwnode = device_get_named_child_node(subname ? md->parent->parent :
+						       md->parent,
+					     subname ? subname : "block");
+	ddev->fwnode = fwnode;
+
 	string_get_size((u64)size, 512, STRING_UNITS_2,
 			cap_str, sizeof(cap_str));
 	pr_info("%s: %s %s %s%s\n",
-- 
2.41.0


* [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM
  2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
  2023-07-19 22:02 ` [RFC PATCH 1/6] mmc: core: set card fwnode_handle Daniel Golle
  2023-07-19 22:02 ` [RFC PATCH 2/6] mmc: block: set fwnode of disk devices Daniel Golle
@ 2023-07-19 22:03 ` Daniel Golle
  2023-07-20  8:24   ` Hannes Reinecke
  2023-07-19 22:03 ` [RFC PATCH 4/6] mtd: blkdevs: set GENHD_FL_NO_NVMEM Daniel Golle
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Daniel Golle @ 2023-07-19 22:03 UTC (permalink / raw)
  To: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Daniel Golle, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

Add a new flag to distinguish block devices which should not act as an
NVMEM provider, such as an emulated block device on top of an MTD
partition which already acts as an NVMEM provider itself.

Signed-off-by: Daniel Golle <daniel@makrotopia.org>
---
 include/linux/blkdev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 2f5371b8482c0..e853d1815be15 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -80,11 +80,14 @@ struct partition_meta_info {
  * ``GENHD_FL_NO_PART``: partition support is disabled.  The kernel will not
  * scan for partitions from add_disk, and users can't add partitions manually.
  *
+ * ``GENHD_FL_NO_NVMEM``: NVMEM emulation is disabled.  The kernel will not
+ * emulate an NVMEM device on top of this disk.
  */
 enum {
 	GENHD_FL_REMOVABLE			= 1 << 0,
 	GENHD_FL_HIDDEN				= 1 << 1,
 	GENHD_FL_NO_PART			= 1 << 2,
+	GENHD_FL_NO_NVMEM			= 1 << 3,
 };
 
 enum {
-- 
2.41.0


* [RFC PATCH 4/6] mtd: blkdevs: set GENHD_FL_NO_NVMEM
  2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
                   ` (2 preceding siblings ...)
  2023-07-19 22:03 ` [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM Daniel Golle
@ 2023-07-19 22:03 ` Daniel Golle
  2023-07-20  6:04   ` Miquel Raynal
  2023-07-19 22:03 ` [RFC PATCH 5/6] mtd: ubi: block: " Daniel Golle
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Daniel Golle @ 2023-07-19 22:03 UTC (permalink / raw)
  To: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Daniel Golle, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

As the MTD subsystem already acts as an NVMEM provider, emulated mtdblock
devices should not be considered NVMEM providers.

Signed-off-by: Daniel Golle <daniel@makrotopia.org>
---
 drivers/mtd/mtd_blkdevs.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index ff18636e08897..82d5afba6a25a 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -362,6 +362,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 		gd->flags |= GENHD_FL_NO_PART;
 	}
 
+	gd->flags |= GENHD_FL_NO_NVMEM;
 	set_capacity(gd, ((u64)new->size * tr->blksize) >> 9);
 
 	/* Create the request queue */
-- 
2.41.0


* [RFC PATCH 5/6] mtd: ubi: block: set GENHD_FL_NO_NVMEM
  2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
                   ` (3 preceding siblings ...)
  2023-07-19 22:03 ` [RFC PATCH 4/6] mtd: blkdevs: set GENHD_FL_NO_NVMEM Daniel Golle
@ 2023-07-19 22:03 ` Daniel Golle
  2023-07-19 22:04 ` [RFC PATCH 6/6] block: implement NVMEM provider Daniel Golle
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Daniel Golle @ 2023-07-19 22:03 UTC (permalink / raw)
  To: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Daniel Golle, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

As UBI volumes may already act as NVMEM providers, emulated
ubiblock devices should not be considered NVMEM providers.

Signed-off-by: Daniel Golle <daniel@makrotopia.org>
---
 drivers/mtd/ubi/block.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
index 1d5148371991b..62d9461395b37 100644
--- a/drivers/mtd/ubi/block.c
+++ b/drivers/mtd/ubi/block.c
@@ -410,7 +410,7 @@ int ubiblock_create(struct ubi_volume_info *vi)
 		ret = -ENODEV;
 		goto out_cleanup_disk;
 	}
-	gd->flags |= GENHD_FL_NO_PART;
+	gd->flags |= GENHD_FL_NO_PART | GENHD_FL_NO_NVMEM;
 	gd->private_data = dev;
 	sprintf(gd->disk_name, "ubiblock%d_%d", dev->ubi_num, dev->vol_id);
 	set_capacity(gd, disk_capacity);
-- 
2.41.0


* [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
                   ` (4 preceding siblings ...)
  2023-07-19 22:03 ` [RFC PATCH 5/6] mtd: ubi: block: " Daniel Golle
@ 2023-07-19 22:04 ` Daniel Golle
  2023-07-19 23:04   ` Damien Le Moal
                     ` (2 more replies)
  2023-07-20  7:09 ` [RFC PATCH 0/6] nvmem: add block device " Christoph Hellwig
                   ` (2 subsequent siblings)
  8 siblings, 3 replies; 34+ messages in thread
From: Daniel Golle @ 2023-07-19 22:04 UTC (permalink / raw)
  To: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Daniel Golle, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

On embedded devices using an eMMC it is common that one or more partitions
on the eMMC are used to store MAC addresses and Wi-Fi calibration EEPROM
data. Allow referencing such a partition in the device tree so that the
kernel and Wi-Fi drivers can access it via the NVMEM layer.
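
To illustrate the intended use (a sketch only: the label and cell name
are invented, while 'nvmem-cells' and 'nvmem-cell-names' are the
standard NVMEM consumer properties), a Wi-Fi node could then reference
a cell stored on an eMMC partition like this:

	&wifi {
		nvmem-cells = <&macaddr_factory>;
		nvmem-cell-names = "mac-address";
	};

where &macaddr_factory is a label on a cell defined inside a partition
node carrying 'compatible = "nvmem-cells"'.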

Signed-off-by: Daniel Golle <daniel@makrotopia.org>
---
 block/Kconfig           |   8 ++
 block/Makefile          |   1 +
 block/blk-nvmem.c       | 187 ++++++++++++++++++++++++++++++++++++++++
 block/blk.h             |  13 +++
 block/genhd.c           |   2 +
 block/partitions/core.c |   2 +
 6 files changed, 213 insertions(+)
 create mode 100644 block/blk-nvmem.c

diff --git a/block/Kconfig b/block/Kconfig
index 86122e459fe04..185573877964d 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -218,6 +218,14 @@ config BLK_MQ_VIRTIO
 config BLK_PM
 	def_bool PM
 
+config BLK_NVMEM
+	bool "Block device NVMEM provider"
+	depends on OF
+	help
+	  Allow block devices (or partitions) to act as NVMEM prodivers,
+	  typically using if an eMMC is used to store MAC addresses or Wi-Fi
+	  calibration data on embedded devices.
+
 # do not use in new code
 config BLOCK_HOLDER_DEPRECATED
 	bool
diff --git a/block/Makefile b/block/Makefile
index 46ada9dc8bbfe..03c0bfa8642df 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_BLK_DEV_ZONED)	+= blk-zoned.o
 obj-$(CONFIG_BLK_WBT)		+= blk-wbt.o
 obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
+obj-$(CONFIG_BLK_NVMEM)		+= blk-nvmem.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
 obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= blk-crypto.o blk-crypto-profile.o \
diff --git a/block/blk-nvmem.c b/block/blk-nvmem.c
new file mode 100644
index 0000000000000..8238511049f56
--- /dev/null
+++ b/block/blk-nvmem.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * block device NVMEM provider
+ *
+ * Copyright (c) 2023 Daniel Golle <daniel@makrotopia.org>
+ *
+ * Useful on devices using a partition on an eMMC for MAC addresses or
+ * Wi-Fi calibration EEPROM data.
+ */
+
+#include "blk.h"
+#include <linux/nvmem-provider.h>
+#include <linux/of.h>
+#include <linux/pagemap.h>
+#include <linux/property.h>
+
+/* List of all NVMEM devices */
+static LIST_HEAD(nvmem_devices);
+static DEFINE_MUTEX(devices_mutex);
+
+struct blk_nvmem {
+	struct nvmem_device *nvmem;
+	struct block_device *bdev;
+	struct list_head list;
+};
+
+static int blk_nvmem_reg_read(void *priv, unsigned int from,
+			      void *val, size_t bytes)
+{
+	pgoff_t f_index = from >> PAGE_SHIFT;
+	struct address_space *mapping;
+	struct blk_nvmem *bnv = priv;
+	size_t bytes_left = bytes;
+	struct folio *folio;
+	unsigned long offs, to_read;
+	void *p;
+
+	if (!bnv->bdev)
+		return -ENODEV;
+
+	offs = from & ((1 << PAGE_SHIFT) - 1);
+	mapping = bnv->bdev->bd_inode->i_mapping;
+
+	while (bytes_left) {
+		folio = read_mapping_folio(mapping, f_index++, NULL);
+		if (IS_ERR(folio))
+			return PTR_ERR(folio);
+
+		to_read = min_t(unsigned long, bytes_left, PAGE_SIZE - offs);
+		p = folio_address(folio) + offset_in_folio(folio, offs);
+		memcpy(val, p, to_read);
+		offs = 0;
+		bytes_left -= to_read;
+		val += to_read;
+		folio_put(folio);
+	}
+
+	return bytes_left == 0 ? 0 : -EIO;
+}
+
+void blk_register_nvmem(struct block_device *bdev)
+{
+	struct fwnode_handle *fw_parts = NULL, *fw_part_c, *fw_part = NULL;
+	struct nvmem_config config = {};
+	const char *partname, *uuid;
+	struct device *dev, *p0dev;
+	struct blk_nvmem *bnv;
+	u32 reg;
+
+	/*
+	 * skip devices which set GENHD_FL_NO_NVMEM
+	 *
+	 * This flag is used for mtdblock and ubiblock devices because
+	 * both, MTD and UBI already implement their own NVMEM provider.
+	 * To avoid registering multiple NVMEM providers for the same
+	 * device node, skip the block NVMEM provider.
+	 */
+	if (bdev->bd_disk->flags & GENHD_FL_NO_NVMEM)
+		return;
+
+	/* skip too large devices */
+	if (bdev_nr_bytes(bdev) > INT_MAX)
+		return;
+
+	dev = &bdev->bd_device;
+	if (!bdev_is_partition(bdev)) {
+		fw_part = dev->fwnode;
+
+		if (!fw_part && dev->parent)
+			fw_part = dev->parent->fwnode;
+
+		goto no_parts;
+	}
+
+	p0dev = &bdev->bd_disk->part0->bd_device;
+	fw_parts = device_get_named_child_node(p0dev, "partitions");
+	if (!fw_parts)
+		fw_parts = device_get_named_child_node(p0dev->parent, "partitions");
+
+	if (!fw_parts)
+		return;
+
+	fwnode_for_each_child_node(fw_parts, fw_part_c) {
+		if (!fwnode_property_read_string(fw_part_c, "uuid", &uuid) &&
+		    (!bdev->bd_meta_info || strncmp(uuid,
+						    bdev->bd_meta_info->uuid,
+						    PARTITION_META_INFO_UUIDLTH)))
+			continue;
+
+		if (!fwnode_property_read_string(fw_part_c, "partname", &partname) &&
+		    (!bdev->bd_meta_info || strncmp(partname,
+						    bdev->bd_meta_info->volname,
+						    PARTITION_META_INFO_VOLNAMELTH)))
+			continue;
+
+		/*
+		 * partition addresses (reg) in device tree greater than
+		 * DISK_MAX_PARTS can be used to match uuid or partname only
+		 */
+		if (!fwnode_property_read_u32(fw_part_c, "reg", &reg) &&
+		    reg < DISK_MAX_PARTS && bdev->bd_partno != reg)
+			continue;
+
+		fw_part = fw_part_c;
+		break;
+	}
+
+no_parts:
+	if (!fwnode_device_is_compatible(fw_part, "nvmem-cells"))
+		return;
+
+	bnv = kzalloc(sizeof(struct blk_nvmem), GFP_KERNEL);
+	if (!bnv)
+		return;
+
+	config.id = NVMEM_DEVID_NONE;
+	config.dev = &bdev->bd_device;
+	config.name = dev_name(&bdev->bd_device);
+	config.owner = THIS_MODULE;
+	config.priv = bnv;
+	config.reg_read = blk_nvmem_reg_read;
+	config.size = bdev_nr_bytes(bdev);
+	config.word_size = 1;
+	config.stride = 1;
+	config.read_only = true;
+	config.root_only = true;
+	config.ignore_wp = true;
+	config.of_node = to_of_node(fw_part);
+
+	bnv->bdev = bdev;
+	bnv->nvmem = nvmem_register(&config);
+	if (IS_ERR(bnv->nvmem)) {
+		/* Just ignore if there is no NVMEM support in the kernel */
+		if (PTR_ERR(bnv->nvmem) != -EOPNOTSUPP)
+			dev_err_probe(&bdev->bd_device, PTR_ERR(bnv->nvmem),
+				      "Failed to register NVMEM device\n");
+
+		kfree(bnv);
+		return;
+	}
+
+	mutex_lock(&devices_mutex);
+	list_add_tail(&bnv->list, &nvmem_devices);
+	mutex_unlock(&devices_mutex);
+}
+
+void blk_unregister_nvmem(struct block_device *bdev)
+{
+	struct blk_nvmem *bnv_c, *bnv = NULL;
+
+	mutex_lock(&devices_mutex);
+	list_for_each_entry(bnv_c, &nvmem_devices, list)
+		if (bnv_c->bdev == bdev) {
+			bnv = bnv_c;
+			break;
+		}
+
+	if (!bnv) {
+		mutex_unlock(&devices_mutex);
+		return;
+	}
+
+	list_del(&bnv->list);
+	mutex_unlock(&devices_mutex);
+	nvmem_unregister(bnv->nvmem);
+	kfree(bnv);
+}
diff --git a/block/blk.h b/block/blk.h
index 686712e138352..7423d0d5494e9 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -515,4 +515,17 @@ static inline int req_ref_read(struct request *req)
 	return atomic_read(&req->ref);
 }
 
+#ifdef CONFIG_BLK_NVMEM
+void blk_register_nvmem(struct block_device *bdev);
+void blk_unregister_nvmem(struct block_device *bdev);
+#else
+static inline void blk_register_nvmem(struct block_device *bdev)
+{
+}
+
+static inline void blk_unregister_nvmem(struct block_device *bdev)
+{
+}
+#endif
+
 #endif /* BLK_INTERNAL_H */
diff --git a/block/genhd.c b/block/genhd.c
index 3d287b32d50df..b306e0f407bb2 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -527,6 +527,7 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
 	disk_update_readahead(disk);
 	disk_add_events(disk);
 	set_bit(GD_ADDED, &disk->state);
+	blk_register_nvmem(disk->part0);
 	return 0;
 
 out_unregister_bdi:
@@ -569,6 +570,7 @@ static void blk_report_disk_dead(struct gendisk *disk)
 		if (bdev->bd_holder_ops && bdev->bd_holder_ops->mark_dead)
 			bdev->bd_holder_ops->mark_dead(bdev);
 		mutex_unlock(&bdev->bd_holder_lock);
+		blk_unregister_nvmem(bdev);
 
 		put_device(&bdev->bd_device);
 		rcu_read_lock();
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 13a7341299a91..68bd655f5e68e 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -404,6 +404,8 @@ static struct block_device *add_partition(struct gendisk *disk, int partno,
 	/* suppress uevent if the disk suppresses it */
 	if (!dev_get_uevent_suppress(ddev))
 		kobject_uevent(&pdev->kobj, KOBJ_ADD);
+
+	blk_register_nvmem(bdev);
 	return bdev;
 
 out_del:
-- 
2.41.0


* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-19 22:04 ` [RFC PATCH 6/6] block: implement NVMEM provider Daniel Golle
@ 2023-07-19 23:04   ` Damien Le Moal
  2023-07-20  0:14     ` Daniel Golle
  2023-07-20  7:04   ` Christoph Hellwig
  2023-07-21  6:35   ` Christoph Hellwig
  2 siblings, 1 reply; 34+ messages in thread
From: Damien Le Moal @ 2023-07-19 23:04 UTC (permalink / raw)
  To: Daniel Golle, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On 7/20/23 07:04, Daniel Golle wrote:
> On embedded devices using an eMMC it is common that one or more partitions
> on the eMMC are used to store MAC addresses and Wi-Fi calibration EEPROM
> data. Allow referencing such a partition in the device tree so that the
> kernel and Wi-Fi drivers can access it via the NVMEM layer.
> 
> Signed-off-by: Daniel Golle <daniel@makrotopia.org>
> ---
>  block/Kconfig           |   8 ++
>  block/Makefile          |   1 +
>  block/blk-nvmem.c       | 187 ++++++++++++++++++++++++++++++++++++++++
>  block/blk.h             |  13 +++
>  block/genhd.c           |   2 +
>  block/partitions/core.c |   2 +
>  6 files changed, 213 insertions(+)
>  create mode 100644 block/blk-nvmem.c
> 
> diff --git a/block/Kconfig b/block/Kconfig
> index 86122e459fe04..185573877964d 100644
> --- a/block/Kconfig
> +++ b/block/Kconfig
> @@ -218,6 +218,14 @@ config BLK_MQ_VIRTIO
>  config BLK_PM
>  	def_bool PM
>  
> +config BLK_NVMEM
> +	bool "Block device NVMEM provider"
> +	depends on OF
> +	help
> +	  Allow block devices (or partitions) to act as NVMEM prodivers,
> +	  typically using if an eMMC is used to store MAC addresses or Wi-Fi

Odd grammar... Maybe "typically used with eMMC to store ..."

> +	  calibration data on embedded devices.
> +
>  # do not use in new code
>  config BLOCK_HOLDER_DEPRECATED
>  	bool
> diff --git a/block/Makefile b/block/Makefile
> index 46ada9dc8bbfe..03c0bfa8642df 100644
> --- a/block/Makefile
> +++ b/block/Makefile
> @@ -34,6 +34,7 @@ obj-$(CONFIG_BLK_DEV_ZONED)	+= blk-zoned.o
>  obj-$(CONFIG_BLK_WBT)		+= blk-wbt.o
>  obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
>  obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
> +obj-$(CONFIG_BLK_NVMEM)		+= blk-nvmem.o
>  obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
>  obj-$(CONFIG_BLK_PM)		+= blk-pm.o
>  obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= blk-crypto.o blk-crypto-profile.o \
> diff --git a/block/blk-nvmem.c b/block/blk-nvmem.c
> new file mode 100644
> index 0000000000000..8238511049f56
> --- /dev/null
> +++ b/block/blk-nvmem.c
> @@ -0,0 +1,187 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * block device NVMEM provider
> + *
> + * Copyright (c) 2023 Daniel Golle <daniel@makrotopia.org>
> + *
> + * Useful on devices using a partition on an eMMC for MAC addresses or
> + * Wi-Fi calibration EEPROM data.
> + */
> +
> +#include "blk.h"
> +#include <linux/nvmem-provider.h>
> +#include <linux/of.h>
> +#include <linux/pagemap.h>
> +#include <linux/property.h>
> +
> +/* List of all NVMEM devices */
> +static LIST_HEAD(nvmem_devices);
> +static DEFINE_MUTEX(devices_mutex);
> +
> +struct blk_nvmem {
> +	struct nvmem_device *nvmem;
> +	struct block_device *bdev;
> +	struct list_head list;
> +};
> +
> +static int blk_nvmem_reg_read(void *priv, unsigned int from,
> +			      void *val, size_t bytes)
> +{
> +	pgoff_t f_index = from >> PAGE_SHIFT;
> +	struct address_space *mapping;
> +	struct blk_nvmem *bnv = priv;

Why not have bnv passed as argument directly ?

> +	size_t bytes_left = bytes;
> +	struct folio *folio;
> +	unsigned long offs, to_read;
> +	void *p;
> +
> +	if (!bnv->bdev)
> +		return -ENODEV;
> +
> +	offs = from & ((1 << PAGE_SHIFT) - 1);

offs = from & PAGE_MASK;

from being an int is really odd though.

> +	mapping = bnv->bdev->bd_inode->i_mapping;
> +
> +	while (bytes_left) {
> +		folio = read_mapping_folio(mapping, f_index++, NULL);
> +		if (IS_ERR(folio))
> +			return PTR_ERR(folio);
> +
> +		to_read = min_t(unsigned long, bytes_left, PAGE_SIZE - offs);
> +		p = folio_address(folio) + offset_in_folio(folio, offs);
> +		memcpy(val, p, to_read);
> +		offs = 0;
> +		bytes_left -= to_read;
> +		val += to_read;
> +		folio_put(folio);
> +	}
> +
> +	return bytes_left == 0 ? 0 : -EIO;

How can bytes_left be 0 here given the above loop with no break ?

> +}
> +
> +void blk_register_nvmem(struct block_device *bdev)
> +{
> +	struct fwnode_handle *fw_parts = NULL, *fw_part_c, *fw_part = NULL;
> +	struct nvmem_config config = {};
> +	const char *partname, *uuid;
> +	struct device *dev, *p0dev;
> +	struct blk_nvmem *bnv;
> +	u32 reg;
> +
> +	/*
> +	 * skip devices which set GENHD_FL_NO_NVMEM
> +	 *
> +	 * This flag is used for mtdblock and ubiblock devices because
> +	 * both, MTD and UBI already implement their own NVMEM provider.
> +	 * To avoid registering multiple NVMEM providers for the same
> +	 * device node, skip the block NVMEM provider.
> +	 */
> +	if (bdev->bd_disk->flags & GENHD_FL_NO_NVMEM)
> +		return;
> +
> +	/* skip too large devices */

Why ? Is that defined in some standards somewhere ?

> +	if (bdev_nr_bytes(bdev) > INT_MAX)
> +		return;
> +
> +	dev = &bdev->bd_device;
> +	if (!bdev_is_partition(bdev)) {
> +		fw_part = dev->fwnode;
> +
> +		if (!fw_part && dev->parent)
> +			fw_part = dev->parent->fwnode;
> +
> +		goto no_parts;
> +	}
> +
> +	p0dev = &bdev->bd_disk->part0->bd_device;
> +	fw_parts = device_get_named_child_node(p0dev, "partitions");
> +	if (!fw_parts)
> +		fw_parts = device_get_named_child_node(p0dev->parent, "partitions");
> +
> +	if (!fw_parts)
> +		return;
> +
> +	fwnode_for_each_child_node(fw_parts, fw_part_c) {
> +		if (!fwnode_property_read_string(fw_part_c, "uuid", &uuid) &&
> +		    (!bdev->bd_meta_info || strncmp(uuid,
> +						    bdev->bd_meta_info->uuid,
> +						    PARTITION_META_INFO_UUIDLTH)))
> +			continue;
> +
> +		if (!fwnode_property_read_string(fw_part_c, "partname", &partname) &&
> +		    (!bdev->bd_meta_info || strncmp(partname,
> +						    bdev->bd_meta_info->volname,
> +						    PARTITION_META_INFO_VOLNAMELTH)))
> +			continue;
> +
> +		/*
> +		 * partition addresses (reg) in device tree greater than
> +		 * DISK_MAX_PARTS can be used to match uuid or partname only
> +		 */
> +		if (!fwnode_property_read_u32(fw_part_c, "reg", &reg) &&
> +		    reg < DISK_MAX_PARTS && bdev->bd_partno != reg)
> +			continue;
> +
> +		fw_part = fw_part_c;
> +		break;
> +	}
> +
> +no_parts:
> +	if (!fwnode_device_is_compatible(fw_part, "nvmem-cells"))
> +		return;
> +
> +	bnv = kzalloc(sizeof(struct blk_nvmem), GFP_KERNEL);
> +	if (!bnv)
> +		return;
> +
> +	config.id = NVMEM_DEVID_NONE;
> +	config.dev = &bdev->bd_device;
> +	config.name = dev_name(&bdev->bd_device);
> +	config.owner = THIS_MODULE;
> +	config.priv = bnv;
> +	config.reg_read = blk_nvmem_reg_read;
> +	config.size = bdev_nr_bytes(bdev);
> +	config.word_size = 1;
> +	config.stride = 1;
> +	config.read_only = true;
> +	config.root_only = true;
> +	config.ignore_wp = true;
> +	config.of_node = to_of_node(fw_part);
> +
> +	bnv->bdev = bdev;
> +	bnv->nvmem = nvmem_register(&config);
> +	if (IS_ERR(bnv->nvmem)) {
> +		/* Just ignore if there is no NVMEM support in the kernel */

If there is not, why would this function even be called ?

> +		if (PTR_ERR(bnv->nvmem) != -EOPNOTSUPP)
> +			dev_err_probe(&bdev->bd_device, PTR_ERR(bnv->nvmem),
> +				      "Failed to register NVMEM device\n");
> +
> +		kfree(bnv);
> +		return;
> +	}
> +
> +	mutex_lock(&devices_mutex);
> +	list_add_tail(&bnv->list, &nvmem_devices);
> +	mutex_unlock(&devices_mutex);
> +}
> +
> +void blk_unregister_nvmem(struct block_device *bdev)
> +{
> +	struct blk_nvmem *bnv_c, *bnv = NULL;
> +
> +	mutex_lock(&devices_mutex);
> +	list_for_each_entry(bnv_c, &nvmem_devices, list)
> +		if (bnv_c->bdev == bdev) {
> +			bnv = bnv_c;
> +			break;
> +		}

Curly brackets for list_for_each_entry() {} would be nice, even though they are
not strictly necessary in this case.

> +
> +	if (!bnv) {
> +		mutex_unlock(&devices_mutex);
> +		return;
> +	}
> +
> +	list_del(&bnv->list);
> +	mutex_unlock(&devices_mutex);
> +	nvmem_unregister(bnv->nvmem);
> +	kfree(bnv);
> +}
> diff --git a/block/blk.h b/block/blk.h
> index 686712e138352..7423d0d5494e9 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -515,4 +515,17 @@ static inline int req_ref_read(struct request *req)
>  	return atomic_read(&req->ref);
>  }
>  
> +#ifdef CONFIG_BLK_NVMEM
> +void blk_register_nvmem(struct block_device *bdev);
> +void blk_unregister_nvmem(struct block_device *bdev);
> +#else
> +static inline void blk_register_nvmem(struct block_device *bdev)
> +{
> +}

These could go at the end of the static inline line.

> +
> +static inline void blk_unregister_nvmem(struct block_device *bdev)
> +{
> +}
> +#endif
> +
>  #endif /* BLK_INTERNAL_H */
> diff --git a/block/genhd.c b/block/genhd.c
> index 3d287b32d50df..b306e0f407bb2 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -527,6 +527,7 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
>  	disk_update_readahead(disk);
>  	disk_add_events(disk);
>  	set_bit(GD_ADDED, &disk->state);
> +	blk_register_nvmem(disk->part0);
>  	return 0;
>  
>  out_unregister_bdi:
> @@ -569,6 +570,7 @@ static void blk_report_disk_dead(struct gendisk *disk)
>  		if (bdev->bd_holder_ops && bdev->bd_holder_ops->mark_dead)
>  			bdev->bd_holder_ops->mark_dead(bdev);
>  		mutex_unlock(&bdev->bd_holder_lock);
> +		blk_unregister_nvmem(bdev);
>  
>  		put_device(&bdev->bd_device);
>  		rcu_read_lock();
> diff --git a/block/partitions/core.c b/block/partitions/core.c
> index 13a7341299a91..68bd655f5e68e 100644
> --- a/block/partitions/core.c
> +++ b/block/partitions/core.c
> @@ -404,6 +404,8 @@ static struct block_device *add_partition(struct gendisk *disk, int partno,
>  	/* suppress uevent if the disk suppresses it */
>  	if (!dev_get_uevent_suppress(ddev))
>  		kobject_uevent(&pdev->kobj, KOBJ_ADD);
> +
> +	blk_register_nvmem(bdev);
>  	return bdev;
>  
>  out_del:

-- 
Damien Le Moal
Western Digital Research



* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-19 23:04   ` Damien Le Moal
@ 2023-07-20  0:14     ` Daniel Golle
  0 siblings, 0 replies; 34+ messages in thread
From: Daniel Golle @ 2023-07-20  0:14 UTC (permalink / raw)
  To: Damien Le Moal
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Ming Lei, Min Li,
	Christian Loehle, Adrian Hunter, Hannes Reinecke, Jack Wang,
	Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede, Ye Bin,
	Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Thu, Jul 20, 2023 at 08:04:56AM +0900, Damien Le Moal wrote:
> On 7/20/23 07:04, Daniel Golle wrote:
> > On embedded devices using an eMMC it is common that one or more partitions
> > on the eMMC are used to store MAC addresses and Wi-Fi calibration EEPROM
> > data. Allow referencing such a partition in the device tree so that the
> > kernel and Wi-Fi drivers can access it via the NVMEM layer.
> > 
> > Signed-off-by: Daniel Golle <daniel@makrotopia.org>
> > ---
> >  block/Kconfig           |   8 ++
> >  block/Makefile          |   1 +
> >  block/blk-nvmem.c       | 187 ++++++++++++++++++++++++++++++++++++++++
> >  block/blk.h             |  13 +++
> >  block/genhd.c           |   2 +
> >  block/partitions/core.c |   2 +
> >  6 files changed, 213 insertions(+)
> >  create mode 100644 block/blk-nvmem.c
> > 
> > diff --git a/block/Kconfig b/block/Kconfig
> > index 86122e459fe04..185573877964d 100644
> > --- a/block/Kconfig
> > +++ b/block/Kconfig
> > @@ -218,6 +218,14 @@ config BLK_MQ_VIRTIO
> >  config BLK_PM
> >  	def_bool PM
> >  
> > +config BLK_NVMEM
> > +	bool "Block device NVMEM provider"
> > +	depends on OF
> > +	help
> > +	  Allow block devices (or partitions) to act as NVMEM prodivers,
> > +	  typically using if an eMMC is used to store MAC addresses or Wi-Fi
> 
> Odd grammar... Maybe "typically used with eMMC to store ..."

Thank you, I'll use your suggestion.

> 
> > +	  calibration data on embedded devices.
> > +
> >  # do not use in new code
> >  config BLOCK_HOLDER_DEPRECATED
> >  	bool
> > diff --git a/block/Makefile b/block/Makefile
> > index 46ada9dc8bbfe..03c0bfa8642df 100644
> > --- a/block/Makefile
> > +++ b/block/Makefile
> > @@ -34,6 +34,7 @@ obj-$(CONFIG_BLK_DEV_ZONED)	+= blk-zoned.o
> >  obj-$(CONFIG_BLK_WBT)		+= blk-wbt.o
> >  obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
> >  obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
> > +obj-$(CONFIG_BLK_NVMEM)		+= blk-nvmem.o
> >  obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
> >  obj-$(CONFIG_BLK_PM)		+= blk-pm.o
> >  obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= blk-crypto.o blk-crypto-profile.o \
> > diff --git a/block/blk-nvmem.c b/block/blk-nvmem.c
> > new file mode 100644
> > index 0000000000000..8238511049f56
> > --- /dev/null
> > +++ b/block/blk-nvmem.c
> > @@ -0,0 +1,187 @@
> > +// SPDX-License-Identifier: GPL-2.0-or-later
> > +/*
> > + * block device NVMEM provider
> > + *
> > + * Copyright (c) 2023 Daniel Golle <daniel@makrotopia.org>
> > + *
> > + * Useful on devices using a partition on an eMMC for MAC addresses or
> > + * Wi-Fi calibration EEPROM data.
> > + */
> > +
> > +#include "blk.h"
> > +#include <linux/nvmem-provider.h>
> > +#include <linux/of.h>
> > +#include <linux/pagemap.h>
> > +#include <linux/property.h>
> > +
> > +/* List of all NVMEM devices */
> > +static LIST_HEAD(nvmem_devices);
> > +static DEFINE_MUTEX(devices_mutex);
> > +
> > +struct blk_nvmem {
> > +	struct nvmem_device *nvmem;
> > +	struct block_device *bdev;
> > +	struct list_head list;
> > +};
> > +
> > +static int blk_nvmem_reg_read(void *priv, unsigned int from,
> > +			      void *val, size_t bytes)
> > +{
> > +	pgoff_t f_index = from >> PAGE_SHIFT;
> > +	struct address_space *mapping;
> > +	struct blk_nvmem *bnv = priv;
> 
> Why not have bnv passed as argument directly ?

Because blk_nvmem_reg_read is used as the reg_read function, i.e.
nvmem_reg_read_t {aka 'int (*)(void *, unsigned int,  void *, long unsigned int)'}

Hence the 'void *' argument has to be implicitly cast to 'struct blk_nvmem *'.

> 
> > +	size_t bytes_left = bytes;
> > +	struct folio *folio;
> > +	unsigned long offs, to_read;
> > +	void *p;
> > +
> > +	if (!bnv->bdev)
> > +		return -ENODEV;
> > +
> > +	offs = from & ((1 << PAGE_SHIFT) - 1);
> 
> offs = from & PAGE_MASK;

Right, that makes it easier...

> 
> from being an int is really odd though.

Yep, but that's how the NVMEM framework defines it.

> 
> > +	mapping = bnv->bdev->bd_inode->i_mapping;
> > +
> > +	while (bytes_left) {
> > +		folio = read_mapping_folio(mapping, f_index++, NULL);
> > +		if (IS_ERR(folio))
> > +			return PTR_ERR(folio);
> > +
> > +		to_read = min_t(unsigned long, bytes_left, PAGE_SIZE - offs);
> > +		p = folio_address(folio) + offset_in_folio(folio, offs);
> > +		memcpy(val, p, to_read);
> > +		offs = 0;
> > +		bytes_left -= to_read;
> > +		val += to_read;
> > +		folio_put(folio);
> > +	}
> > +
> > +	return bytes_left == 0 ? 0 : -EIO;
> 
> How can bytes_left be 0 here given the above loop with no break ?

Well, right... Can just be 'return 0;' at this point obviously.

> 
> > +}
> > +
> > +void blk_register_nvmem(struct block_device *bdev)
> > +{
> > +	struct fwnode_handle *fw_parts = NULL, *fw_part_c, *fw_part = NULL;
> > +	struct nvmem_config config = {};
> > +	const char *partname, *uuid;
> > +	struct device *dev, *p0dev;
> > +	struct blk_nvmem *bnv;
> > +	u32 reg;
> > +
> > +	/*
> > +	 * skip devices which set GENHD_FL_NO_NVMEM
> > +	 *
> > +	 * This flag is used for mtdblock and ubiblock devices because
> > +	 * both, MTD and UBI already implement their own NVMEM provider.
> > +	 * To avoid registering multiple NVMEM providers for the same
> > +	 * device node, skip the block NVMEM provider.
> > +	 */
> > +	if (bdev->bd_disk->flags & GENHD_FL_NO_NVMEM)
> > +		return;
> > +
> > +	/* skip too large devices */
> 
> Why ? Is that defined in some standards somewhere ?

The NVMEM framework uses 'int' to address byte offsets inside NVMEM devices.
Hence devices larger than INT_MAX cannot be addressed; I will also state
that in the comment.

> 
> > +	if (bdev_nr_bytes(bdev) > INT_MAX)
> > +		return;
> > +
> > +	dev = &bdev->bd_device;
> > +	if (!bdev_is_partition(bdev)) {
> > +		fw_part = dev->fwnode;
> > +
> > +		if (!fw_part && dev->parent)
> > +			fw_part = dev->parent->fwnode;
> > +
> > +		goto no_parts;
> > +	}
> > +
> > +	p0dev = &bdev->bd_disk->part0->bd_device;
> > +	fw_parts = device_get_named_child_node(p0dev, "partitions");
> > +	if (!fw_parts)
> > +		fw_parts = device_get_named_child_node(p0dev->parent, "partitions");
> > +
> > +	if (!fw_parts)
> > +		return;
> > +
> > +	fwnode_for_each_child_node(fw_parts, fw_part_c) {
> > +		if (!fwnode_property_read_string(fw_part_c, "uuid", &uuid) &&
> > +		    (!bdev->bd_meta_info || strncmp(uuid,
> > +						    bdev->bd_meta_info->uuid,
> > +						    PARTITION_META_INFO_UUIDLTH)))
> > +			continue;
> > +
> > +		if (!fwnode_property_read_string(fw_part_c, "partname", &partname) &&
> > +		    (!bdev->bd_meta_info || strncmp(partname,
> > +						    bdev->bd_meta_info->volname,
> > +						    PARTITION_META_INFO_VOLNAMELTH)))
> > +			continue;
> > +
> > +		/*
> > +		 * partition addresses (reg) in device tree greater than
> > +		 * DISK_MAX_PARTS can be used to match uuid or partname only
> > +		 */
> > +		if (!fwnode_property_read_u32(fw_part_c, "reg", &reg) &&
> > +		    reg < DISK_MAX_PARTS && bdev->bd_partno != reg)
> > +			continue;
> > +
> > +		fw_part = fw_part_c;
> > +		break;
> > +	}
> > +
> > +no_parts:
> > +	if (!fwnode_device_is_compatible(fw_part, "nvmem-cells"))
> > +		return;
> > +
> > +	bnv = kzalloc(sizeof(struct blk_nvmem), GFP_KERNEL);
> > +	if (!bnv)
> > +		return;
> > +
> > +	config.id = NVMEM_DEVID_NONE;
> > +	config.dev = &bdev->bd_device;
> > +	config.name = dev_name(&bdev->bd_device);
> > +	config.owner = THIS_MODULE;
> > +	config.priv = bnv;
> > +	config.reg_read = blk_nvmem_reg_read;
> > +	config.size = bdev_nr_bytes(bdev);
> > +	config.word_size = 1;
> > +	config.stride = 1;
> > +	config.read_only = true;
> > +	config.root_only = true;
> > +	config.ignore_wp = true;
> > +	config.of_node = to_of_node(fw_part);
> > +
> > +	bnv->bdev = bdev;
> > +	bnv->nvmem = nvmem_register(&config);
> > +	if (IS_ERR(bnv->nvmem)) {
> > +		/* Just ignore if there is no NVMEM support in the kernel */
> 
> If there is not, why would this function even be called ?

True. Kconfig depends should make sure blk-nvmem is only built if
nvmem is supported at all.

> 
> > +		if (PTR_ERR(bnv->nvmem) != -EOPNOTSUPP)
> > +			dev_err_probe(&bdev->bd_device, PTR_ERR(bnv->nvmem),
> > +				      "Failed to register NVMEM device\n");
> > +
> > +		kfree(bnv);
> > +		return;
> > +	}
> > +
> > +	mutex_lock(&devices_mutex);
> > +	list_add_tail(&bnv->list, &nvmem_devices);
> > +	mutex_unlock(&devices_mutex);
> > +}
> > +
> > +void blk_unregister_nvmem(struct block_device *bdev)
> > +{
> > +	struct blk_nvmem *bnv_c, *bnv = NULL;
> > +
> > +	mutex_lock(&devices_mutex);
> > +	list_for_each_entry(bnv_c, &nvmem_devices, list)
> > +		if (bnv_c->bdev == bdev) {
> > +			bnv = bnv_c;
> > +			break;
> > +		}
> 
> Curly brackets for list_for_each_entry() {} would be nice, even though they are
> not strictly necessary in this case.

Ack, will add them.

> 
> > +
> > +	if (!bnv) {
> > +		mutex_unlock(&devices_mutex);
> > +		return;
> > +	}
> > +
> > +	list_del(&bnv->list);
> > +	mutex_unlock(&devices_mutex);
> > +	nvmem_unregister(bnv->nvmem);
> > +	kfree(bnv);
> > +}
> > diff --git a/block/blk.h b/block/blk.h
> > index 686712e138352..7423d0d5494e9 100644
> > --- a/block/blk.h
> > +++ b/block/blk.h
> > @@ -515,4 +515,17 @@ static inline int req_ref_read(struct request *req)
> >  	return atomic_read(&req->ref);
> >  }
> >  
> > +#ifdef CONFIG_BLK_NVMEM
> > +void blk_register_nvmem(struct block_device *bdev);
> > +void blk_unregister_nvmem(struct block_device *bdev);
> > +#else
> > +static inline void blk_register_nvmem(struct block_device *bdev)
> > +{
> > +}
> 
> These could go at the end of the static inline line.

I tried keeping the existing style in that header file.
See a couple of lines above:
static inline void bio_integrity_free(struct bio *bio)
{
}

> 
> > +
> > +static inline void blk_unregister_nvmem(struct block_device *bdev)
> > +{
> > +}
> > +#endif
> > +
> >  #endif /* BLK_INTERNAL_H */
> > diff --git a/block/genhd.c b/block/genhd.c
> > index 3d287b32d50df..b306e0f407bb2 100644
> > --- a/block/genhd.c
> > +++ b/block/genhd.c
> > @@ -527,6 +527,7 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
> >  	disk_update_readahead(disk);
> >  	disk_add_events(disk);
> >  	set_bit(GD_ADDED, &disk->state);
> > +	blk_register_nvmem(disk->part0);
> >  	return 0;
> >  
> >  out_unregister_bdi:
> > @@ -569,6 +570,7 @@ static void blk_report_disk_dead(struct gendisk *disk)
> >  		if (bdev->bd_holder_ops && bdev->bd_holder_ops->mark_dead)
> >  			bdev->bd_holder_ops->mark_dead(bdev);
> >  		mutex_unlock(&bdev->bd_holder_lock);
> > +		blk_unregister_nvmem(bdev);
> >  
> >  		put_device(&bdev->bd_device);
> >  		rcu_read_lock();
> > diff --git a/block/partitions/core.c b/block/partitions/core.c
> > index 13a7341299a91..68bd655f5e68e 100644
> > --- a/block/partitions/core.c
> > +++ b/block/partitions/core.c
> > @@ -404,6 +404,8 @@ static struct block_device *add_partition(struct gendisk *disk, int partno,
> >  	/* suppress uevent if the disk suppresses it */
> >  	if (!dev_get_uevent_suppress(ddev))
> >  		kobject_uevent(&pdev->kobj, KOBJ_ADD);
> > +
> > +	blk_register_nvmem(bdev);
> >  	return bdev;
> >  
> >  out_del:
> 
> -- 
> Damien Le Moal
> Western Digital Research
> 


* Re: [RFC PATCH 4/6] mtd: blkdevs: set GENHD_FL_NO_NVMEM
  2023-07-19 22:03 ` [RFC PATCH 4/6] mtd: blkdevs: set GENHD_FL_NO_NVMEM Daniel Golle
@ 2023-07-20  6:04   ` Miquel Raynal
  0 siblings, 0 replies; 34+ messages in thread
From: Miquel Raynal @ 2023-07-20  6:04 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Jens Axboe, Ulf Hansson, Richard Weinberger, Vignesh Raghavendra,
	Dave Chinner, Matthew Wilcox, Thomas Weißschuh, Jan Kara,
	Damien Le Moal, Ming Lei, Min Li, Christian Loehle,
	Adrian Hunter, Hannes Reinecke, Jack Wang, Florian Fainelli,
	Yeqi Fu, Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

Hi Daniel,

daniel@makrotopia.org wrote on Wed, 19 Jul 2023 23:03:24 +0100:

> As the MTD subsystem already acts as an NVMEM provider, emulated mtdblock
> devices should not be considered NVMEM providers.
> 
> Signed-off-by: Daniel Golle <daniel@makrotopia.org>
> ---
>  drivers/mtd/mtd_blkdevs.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
> index ff18636e08897..82d5afba6a25a 100644
> --- a/drivers/mtd/mtd_blkdevs.c
> +++ b/drivers/mtd/mtd_blkdevs.c
> @@ -362,6 +362,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
>  		gd->flags |= GENHD_FL_NO_PART;
>  	}
>  
> +	gd->flags |= GENHD_FL_NO_NVMEM;
>  	set_capacity(gd, ((u64)new->size * tr->blksize) >> 9);
>  
>  	/* Create the request queue */

Acked-by: Miquel Raynal <miquel.raynal@bootlin.com>

Thanks,
Miquèl


* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-19 22:04 ` [RFC PATCH 6/6] block: implement NVMEM provider Daniel Golle
  2023-07-19 23:04   ` Damien Le Moal
@ 2023-07-20  7:04   ` Christoph Hellwig
  2023-07-20 16:02     ` Daniel Golle
  2023-07-21  6:35   ` Christoph Hellwig
  2 siblings, 1 reply; 34+ messages in thread
From: Christoph Hellwig @ 2023-07-20  7:04 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

The layering here is exactly the wrong way around.  This block device
as nvmem provider has no business sitting in the block layer and being
keyed off the gendisk registration.  Instead you should create a new
nvmem backend that opens the block device as needed if it fits your
OF description, without any changes to the core block layer.



* Re: [RFC PATCH 0/6] nvmem: add block device NVMEM provider
  2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
                   ` (5 preceding siblings ...)
  2023-07-19 22:04 ` [RFC PATCH 6/6] block: implement NVMEM provider Daniel Golle
@ 2023-07-20  7:09 ` Christoph Hellwig
  2023-07-20 15:30 ` Bart Van Assche
  2023-07-21  8:32 ` Ming Lei
  8 siblings, 0 replies; 34+ messages in thread
From: Christoph Hellwig @ 2023-07-20  7:09 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Wed, Jul 19, 2023 at 11:01:14PM +0100, Daniel Golle wrote:
> This series is meant to open the discussion on what exactly the device
> tree schema for block devices and partitions should look like, and even
> on whether using the block layer to back the NVMEM device is the way to
> go at all -- to me it seemed a good solution because it will be reusable
> e.g. for NVMe.

If only NVMe did something sane there.  NVMe copied the completely
idiotic idea of boot partitions from eMMC/SD, and made it even worse
by first requiring bit-banged register access for them, and now adding
a weird admin command to read them, which is not bound to a block
device.  If we want to support that in NVMe we'll need code in the
nvme core, but I hope we can avoid it and no one sane (that is
everyone but the completely clueless BIOS people at Intel) will ever
use this completely fucked up boot partition concept.



* Re: [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM
  2023-07-19 22:03 ` [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM Daniel Golle
@ 2023-07-20  8:24   ` Hannes Reinecke
  2023-07-20 13:47     ` Daniel Golle
  0 siblings, 1 reply; 34+ messages in thread
From: Hannes Reinecke @ 2023-07-20  8:24 UTC (permalink / raw)
  To: Daniel Golle, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter, Jack Wang,
	Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede, Ye Bin,
	Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On 7/20/23 00:03, Daniel Golle wrote:
> Add a new flag to distinguish block devices which should not act as an
> NVMEM provider, such as an emulated block device on top of an MTD
> partition which already acts as an NVMEM provider itself.
> 
> Signed-off-by: Daniel Golle <daniel@makrotopia.org>
> ---
>   include/linux/blkdev.h | 3 +++
>   1 file changed, 3 insertions(+)
> 
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 2f5371b8482c0..e853d1815be15 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -80,11 +80,14 @@ struct partition_meta_info {
>    * ``GENHD_FL_NO_PART``: partition support is disabled.  The kernel will not
>    * scan for partitions from add_disk, and users can't add partitions manually.
>    *
> + * ``GENHD_FL_NO_NVMEM``: NVMEM emulation is disabled.  The kernel will not
> + * emulate an NVMEM device on top of this disk.
>    */
>   enum {
>   	GENHD_FL_REMOVABLE			= 1 << 0,
>   	GENHD_FL_HIDDEN				= 1 << 1,
>   	GENHD_FL_NO_PART			= 1 << 2,
> +	GENHD_FL_NO_NVMEM			= 1 << 3,
>   };
>   
>   enum {
Please reverse this flag. Most of the devices will not have an NVMEM 
partition, and we shouldn't require each and every driver to tag their 
devices.
So please use GENHD_FL_NVMEM and only set this flag on devices which 
really have an NVMEM partition.

Cheers,

Hannes



* Re: [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM
  2023-07-20  8:24   ` Hannes Reinecke
@ 2023-07-20 13:47     ` Daniel Golle
  2023-07-20 14:03       ` Hannes Reinecke
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Golle @ 2023-07-20 13:47 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Jack Wang,
	Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede, Ye Bin,
	Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Thu, Jul 20, 2023 at 10:24:18AM +0200, Hannes Reinecke wrote:
> On 7/20/23 00:03, Daniel Golle wrote:
> > Add a new flag to distinguish block devices which should not act as an
> > NVMEM provider, such as an emulated block device on top of an MTD
> > partition which already acts as an NVMEM provider itself.
> > 
> > Signed-off-by: Daniel Golle <daniel@makrotopia.org>
> > ---
> >   include/linux/blkdev.h | 3 +++
> >   1 file changed, 3 insertions(+)
> > 
> > diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> > index 2f5371b8482c0..e853d1815be15 100644
> > --- a/include/linux/blkdev.h
> > +++ b/include/linux/blkdev.h
> > @@ -80,11 +80,14 @@ struct partition_meta_info {
> >    * ``GENHD_FL_NO_PART``: partition support is disabled.  The kernel will not
> >    * scan for partitions from add_disk, and users can't add partitions manually.
> >    *
> > + * ``GENHD_FL_NO_NVMEM``: NVMEM emulation is disabled.  The kernel will not
> > + * emulate an NVMEM device on top of this disk.
> >    */
> >   enum {
> >   	GENHD_FL_REMOVABLE			= 1 << 0,
> >   	GENHD_FL_HIDDEN				= 1 << 1,
> >   	GENHD_FL_NO_PART			= 1 << 2,
> > +	GENHD_FL_NO_NVMEM			= 1 << 3,
> >   };
> >   enum {
> Please reverse this flag. Most of the devices will not have an NVMEM
> partition, and we shouldn't require each and every driver to tag their
> devices.
> So please use GENHD_FL_NVMEM and only set this flag on devices which really
> have an NVMEM partition.

The idea here was to exclude all those devices which already implement
an NVMEM provider on a lower layer themselves, such as MTD.
In such cases it would be ambiguous whether the OF node represents the
NVMEM device registered by the MTD framework or whether blk-nvmem should
be used.

In all other cases device tree can unambiguously indicate whether a
block device should serve as NVMEM provider (and right, most of them
never will).

However, reversing the logic seems fine just as well.


* Re: [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM
  2023-07-20 13:47     ` Daniel Golle
@ 2023-07-20 14:03       ` Hannes Reinecke
  2023-07-20 14:28         ` Daniel Golle
  0 siblings, 1 reply; 34+ messages in thread
From: Hannes Reinecke @ 2023-07-20 14:03 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Jack Wang,
	Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede, Ye Bin,
	Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On 7/20/23 15:47, Daniel Golle wrote:
> On Thu, Jul 20, 2023 at 10:24:18AM +0200, Hannes Reinecke wrote:
>> On 7/20/23 00:03, Daniel Golle wrote:
>>> Add a new flag to distinguish block devices which should not act as an
>>> NVMEM provider, such as an emulated block device on top of an MTD
>>> partition which already acts as an NVMEM provider itself.
>>>
>>> Signed-off-by: Daniel Golle <daniel@makrotopia.org>
>>> ---
>>>    include/linux/blkdev.h | 3 +++
>>>    1 file changed, 3 insertions(+)
>>>
>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>> index 2f5371b8482c0..e853d1815be15 100644
>>> --- a/include/linux/blkdev.h
>>> +++ b/include/linux/blkdev.h
>>> @@ -80,11 +80,14 @@ struct partition_meta_info {
>>>     * ``GENHD_FL_NO_PART``: partition support is disabled.  The kernel will not
>>>     * scan for partitions from add_disk, and users can't add partitions manually.
>>>     *
>>> + * ``GENHD_FL_NO_NVMEM``: NVMEM emulation is disabled.  The kernel will not
>>> + * emulate an NVMEM device on top of this disk.
>>>     */
>>>    enum {
>>>    	GENHD_FL_REMOVABLE			= 1 << 0,
>>>    	GENHD_FL_HIDDEN				= 1 << 1,
>>>    	GENHD_FL_NO_PART			= 1 << 2,
>>> +	GENHD_FL_NO_NVMEM			= 1 << 3,
>>>    };
>>>    enum {
>> Please reverse this flag. Most of the devices will not have an NVMEM
>> partition, and we shouldn't require each and every driver to tag their
>> devices.
>> So please use GENHD_FL_NVMEM and only set this flag on devices which really
>> have an NVMEM partition.
> 
> The idea here was to exclude all those devices which already implement
> an NVMEM provider on a lower layer themselves, such as MTD.
> In such cases it would be ambiguous whether the OF node represents the
> NVMEM device registered by the MTD framework or whether blk-nvmem should
> be used.
> 
Hmm; not sure if I follow.
In the end, it doesn't really matter whether you check for
GENHD_FL_NO_NVMEM or !GENHD_FL_NVMEM.
With the difference being that in the former case you have to
tag 99% of all existing block devices, and in the latter you
have to tag 1%.

> In all other cases device tree can unambiguously indicate whether a
> block device should serve as NVMEM provider (and right, most of them
> never will).
> 
> However, reversing the logic seems fine just as well.

Thanks. Please do.

Cheers,

Hannes



* Re: [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM
  2023-07-20 14:03       ` Hannes Reinecke
@ 2023-07-20 14:28         ` Daniel Golle
  2023-07-20 14:34           ` Hannes Reinecke
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Golle @ 2023-07-20 14:28 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Jack Wang,
	Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede, Ye Bin,
	Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Thu, Jul 20, 2023 at 04:03:22PM +0200, Hannes Reinecke wrote:
> On 7/20/23 15:47, Daniel Golle wrote:
> > On Thu, Jul 20, 2023 at 10:24:18AM +0200, Hannes Reinecke wrote:
> > > On 7/20/23 00:03, Daniel Golle wrote:
> > > > Add a new flag to distinguish block devices which should not act as an
> > > > NVMEM provider, such as for example an emulated block device on top of
> > > > an MTD partition which already acts as an NVMEM provider itself.
> > > > 
> > > > Signed-off-by: Daniel Golle <daniel@makrotopia.org>
> > > > ---
> > > >    include/linux/blkdev.h | 3 +++
> > > >    1 file changed, 3 insertions(+)
> > > > 
> > > > diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> > > > index 2f5371b8482c0..e853d1815be15 100644
> > > > --- a/include/linux/blkdev.h
> > > > +++ b/include/linux/blkdev.h
> > > > @@ -80,11 +80,14 @@ struct partition_meta_info {
> > > >     * ``GENHD_FL_NO_PART``: partition support is disabled.  The kernel will not
> > > >     * scan for partitions from add_disk, and users can't add partitions manually.
> > > >     *
> > > > + * ``GENHD_FL_NO_NVMEM``: NVMEM emulation is disabled.  The kernel will not
> > > > + * emulate an NVMEM device on top of this disk.
> > > >     */
> > > >    enum {
> > > >    	GENHD_FL_REMOVABLE			= 1 << 0,
> > > >    	GENHD_FL_HIDDEN				= 1 << 1,
> > > >    	GENHD_FL_NO_PART			= 1 << 2,
> > > > +	GENHD_FL_NO_NVMEM			= 1 << 3,
> > > >    };
> > > >    enum {
> > > Please reverse this flag. Most of the devices will not have an NVMEM
> > > partition, and we shouldn't require each and every driver to tag their
> > > devices.
> > > So please use GENHD_FL_NVMEM and only set this flag on devices which really
> > > have an NVMEM partition.
> > 
> > The idea here was to exclude all those devices which already implement
> > an NVMEM provider on a lower layer themselves, such as MTD.
> > In these cases it would be ambiguous whether the OF node represents the
> > NVMEM device registered by the MTD framework or whether blk-nvmem should
> > be used.
> > 
> Hmm; not sure if I follow.
> In the end, it doesn't really matter whether you check for
> GENHD_FL_NO_NVMEM or !GENHD_FL_NVMEM.
> With the difference being that in the former case you have to
> tag 99% of all existing block devices, and in the latter you
> have to tag 1%.

That's not exactly true. In the current case I only have to flag MTD
(and UBI in the future; I'm working on a UBI NVMEM provider as well) with
GENHD_FL_NO_NVMEM, so a 'compatible = "nvmem-cells"' in the
corresponding device tree node should not result in blk-nvmem creating
an NVMEM device based on the (mtd/ubi)block device, simply because the
MTD framework (and UBI in future) will already have created their own
NVMEM device attached to the very same device tree node.

In all other cases of block devices, the compatible string can be used
to unambiguously decide whether an NVMEM device should be created or
not. blk-nvmem is opt-in, so unless the device is flagged by
'compatible = "nvmem-cells"' it will not do anything.

For all devices which anyway do not have any device tree representation
it won't do anything (think: loop, nbd, ...); we would not need to opt
them out using GENHD_FL_NO_NVMEM. Also all other drivers which do not
already bring their own NVMEM implementation won't need GENHD_FL_NO_NVMEM;
the absence of 'compatible = "nvmem-cells"' is enough to indicate that
they should not be considered as NVMEM providers.

The way you are suggesting will require that, in addition to selecting
the targeted block device in device tree, the block driver will also
have to set GENHD_FL_NVMEM. Hence we will need changes in MMC, NVMe
and potentially also SATA disk drivers setting GENHD_FL_NVMEM when
registering the disk.
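
For illustration, the opt-in/opt-out logic described above could look
roughly like the following sketch (dev_fwnode() and
fwnode_property_match_string() are existing kernel APIs; the helper name
and its exact placement are assumptions, not code from this series):

#include <linux/blkdev.h>
#include <linux/property.h>

/* Sketch: decide whether blk-nvmem should attach to a given disk.
 * GENHD_FL_NO_NVMEM is the flag proposed in this series. */
static bool blk_nvmem_should_register(struct gendisk *disk)
{
        struct fwnode_handle *fwnode = dev_fwnode(disk_to_dev(disk));

        /* Opt-in: only nodes marked as NVMEM cell providers qualify. */
        if (!fwnode || fwnode_property_match_string(fwnode, "compatible",
                                                    "nvmem-cells") < 0)
                return false;

        /* Opt-out: a lower layer (MTD, UBI) already provides NVMEM. */
        if (disk->flags & GENHD_FL_NO_NVMEM)
                return false;

        return true;
}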

> 
> > In all other cases device tree can unambiguously indicate whether a
> > block device should serve as NVMEM provider (and right, most of them
> > never will).
> > 
> > However, reversing the logic seems fine just as well.
> 
> Thanks. Please do.

Either way is fine with me, just 99% vs. 1% is not the right argument
in this case.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM
  2023-07-20 14:28         ` Daniel Golle
@ 2023-07-20 14:34           ` Hannes Reinecke
  0 siblings, 0 replies; 34+ messages in thread
From: Hannes Reinecke @ 2023-07-20 14:34 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Jack Wang,
	Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede, Ye Bin,
	Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On 7/20/23 16:28, Daniel Golle wrote:
> On Thu, Jul 20, 2023 at 04:03:22PM +0200, Hannes Reinecke wrote:
>> On 7/20/23 15:47, Daniel Golle wrote:
>>> On Thu, Jul 20, 2023 at 10:24:18AM +0200, Hannes Reinecke wrote:
>>>> On 7/20/23 00:03, Daniel Golle wrote:
>>>>> Add a new flag to distinguish block devices which should not act as an
>>>>> NVMEM provider, such as for example an emulated block device on top of
>>>>> an MTD partition which already acts as an NVMEM provider itself.
>>>>>
>>>>> Signed-off-by: Daniel Golle <daniel@makrotopia.org>
>>>>> ---
>>>>>     include/linux/blkdev.h | 3 +++
>>>>>     1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>>> index 2f5371b8482c0..e853d1815be15 100644
>>>>> --- a/include/linux/blkdev.h
>>>>> +++ b/include/linux/blkdev.h
>>>>> @@ -80,11 +80,14 @@ struct partition_meta_info {
>>>>>      * ``GENHD_FL_NO_PART``: partition support is disabled.  The kernel will not
>>>>>      * scan for partitions from add_disk, and users can't add partitions manually.
>>>>>      *
>>>>> + * ``GENHD_FL_NO_NVMEM``: NVMEM emulation is disabled.  The kernel will not
>>>>> + * emulate an NVMEM device on top of this disk.
>>>>>      */
>>>>>     enum {
>>>>>     	GENHD_FL_REMOVABLE			= 1 << 0,
>>>>>     	GENHD_FL_HIDDEN				= 1 << 1,
>>>>>     	GENHD_FL_NO_PART			= 1 << 2,
>>>>> +	GENHD_FL_NO_NVMEM			= 1 << 3,
>>>>>     };
>>>>>     enum {
>>>> Please reverse this flag. Most of the devices will not have an NVMEM
>>>> partition, and we shouldn't require each and every driver to tag their
>>>> devices.
>>>> So please use GENHD_FL_NVMEM and only set this flag on devices which really
>>>> have an NVMEM partition.
>>>
>>> The idea here was to exclude all those devices which already implement
>>> an NVMEM provider on a lower layer themselves, such as MTD.
>>> In these cases it would be ambiguous whether the OF node represents the
>>> NVMEM device registered by the MTD framework or whether blk-nvmem should
>>> be used.
>>>
>> Hmm; not sure if I follow.
>> In the end, it doesn't really matter whether you check for
>> GENHD_FL_NO_NVMEM or !GENHD_FL_NVMEM.
>> With the difference being that in the former case you have to
>> tag 99% of all existing block devices, and in the latter you
>> have to tag 1%.
> 
> That's not exactly true. In the current case I only have to flag MTD
> (and UBI in the future; I'm working on a UBI NVMEM provider as well) with
> GENHD_FL_NO_NVMEM, so a 'compatible = "nvmem-cells"' in the
> corresponding device tree node should not result in blk-nvmem creating
> an NVMEM device based on the (mtd/ubi)block device, simply because the
> MTD framework (and UBI in future) will already have created their own
> NVMEM device attached to the very same device tree node.
> 
> In all other cases of block devices, the compatible string can be used
> to unambiguously decide whether an NVMEM device should be created or
> not. blk-nvmem is opt-in, so unless the device is flagged by
> 'compatible = "nvmem-cells"' it will not do anything.
> 
> For all devices which anyway do not have any device tree representation
> it won't do anything (think: loop, nbd, ...); we would not need to opt
> them out using GENHD_FL_NO_NVMEM. Also all other drivers which do not
> already bring their own NVMEM implementation won't need GENHD_FL_NO_NVMEM;
> the absence of 'compatible = "nvmem-cells"' is enough to indicate that
> they should not be considered as NVMEM providers.
> 
> The way you are suggesting will require that, in addition to selecting
> the targeted block device in device tree, the block driver will also
> have to set GENHD_FL_NVMEM. Hence we will need changes in MMC, NVMe
> and potentially also SATA disk drivers setting GENHD_FL_NVMEM when
> registering the disk.
> 
That is absolutely correct, and was my intention all along.
Drivers which can (and do) supply an NVMEM partition should be required 
to set this flag, yes.

Cheers,

Hannes


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 0/6] nvmem: add block device NVMEM provider
  2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
                   ` (6 preceding siblings ...)
  2023-07-20  7:09 ` [RFC PATCH 0/6] nvmem: add block device " Christoph Hellwig
@ 2023-07-20 15:30 ` Bart Van Assche
  2023-07-20 18:03   ` Daniel Golle
  2023-07-21  8:32 ` Ming Lei
  8 siblings, 1 reply; 34+ messages in thread
From: Bart Van Assche @ 2023-07-20 15:30 UTC (permalink / raw)
  To: Daniel Golle, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

On 7/19/23 15:01, Daniel Golle wrote:
> On embedded devices using an eMMC it is common that one or more (hw/sw)
> partitions on the eMMC are used to store MAC addresses and Wi-Fi
> calibration EEPROM data.
> 
> Implement an NVMEM provider backed by block devices as typically the
> NVMEM framework is used to have kernel drivers read and use binary data
> from EEPROMs, efuses, flash memory (MTD), ...
> 
> In order to be able to reference hardware partitions on an eMMC, add code
> to bind each hardware partition to a specific firmware subnode.
> 
> This series is meant to open the discussion on how exactly the device tree
> schema for block devices and partitions may look, and even whether using
> the block layer to back the NVMEM device is the way to go at all -- to me
> it seemed to be a good solution because it will be reusable, e.g. for NVMe.

Is my understanding correct that these devices boot from eMMC and not over
Wi-Fi? If so, why does this calibration data have to be stored on a raw
block device? Why can't this information be loaded from a file on a
filesystem?

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-20  7:04   ` Christoph Hellwig
@ 2023-07-20 16:02     ` Daniel Golle
  2023-07-21  6:31       ` Christoph Hellwig
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Golle @ 2023-07-20 16:02 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> The layering here is exactly the wrong way around.  This block device
> as nvmem provider has no business sitting in the block layer and being
> keyed off the gendisk registration.  Instead you should create a new
> nvmem backend that opens the block device as needed if it fits your
> OF description without any changes to the core block layer.
> 

Ok. I will use a class_interface instead.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 0/6] nvmem: add block device NVMEM provider
  2023-07-20 15:30 ` Bart Van Assche
@ 2023-07-20 18:03   ` Daniel Golle
  0 siblings, 0 replies; 34+ messages in thread
From: Daniel Golle @ 2023-07-20 18:03 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Thu, Jul 20, 2023 at 08:30:39AM -0700, Bart Van Assche wrote:
> On 7/19/23 15:01, Daniel Golle wrote:
> > On embedded devices using an eMMC it is common that one or more (hw/sw)
> > partitions on the eMMC are used to store MAC addresses and Wi-Fi
> > calibration EEPROM data.
> > 
> > Implement an NVMEM provider backed by block devices as typically the
> > NVMEM framework is used to have kernel drivers read and use binary data
> > from EEPROMs, efuses, flash memory (MTD), ...
> > 
> > In order to be able to reference hardware partitions on an eMMC, add code
> > to bind each hardware partition to a specific firmware subnode.
> > 
> > This series is meant to open the discussion on how exactly the device tree
> > schema for block devices and partitions may look, and even whether using
> > the block layer to back the NVMEM device is the way to go at all -- to me
> > it seemed to be a good solution because it will be reusable, e.g. for NVMe.
> 
> Is my understanding correct that these devices boot from eMMC and not over
> Wi-Fi?

Yes, that's right.

> If so, why does this calibration data have to be stored on a raw
> block device?

It's just how many vendors decided to implement it in their firmware.

I reckon this happened mostly out of habit and in order to avoid any
potential complexities in their manufacturing and QA processes -- most
of those processes have been created in a time when NOR flash was the
most common way to store firmware on embedded devices with Wi-Fi.

The advent of raw NAND and SPI-NAND didn't change much about that;
most vendors still just use a raw MTD partition to store all that
per-device 'factory' data. Very few have started to use raw data
inside UBI volumes (and I've been working on a UBI NVMEM driver as
well).

Also when it comes to eMMC, vendors just keep doing things the way they
have always done them; it's just that instead of a raw NOR flash, some
devices nowadays come with an eMMC. Some vendors use the 'boot1' hardware
partition to store things like MAC addresses at raw offsets there
(GL.iNet for example [1]), others use small GPT or MBR partitions on
the main hardware partition of the eMMC to store MAC addresses and
calibration blobs (for example Adtran [2]).

> Why can't this information be loaded from a file on a
> filesystem?

Having this information stored in a file on a filesystem is also an
option, of course, but out in the wild you will hardly find it being
done like that, for the reasons stated above and also because having
the OS pull the Ethernet MAC addresses from a filesystem while in
early user space requires the OS to know about the individual hardware
and how this should be done. So it's not exactly beautiful, and it may
need fsck, mounting, ... all before the machine can become available
with its assigned MAC address on Ethernet.

The Raspberry Pi and other brcmfmac-based systems are the exception to
the rule: here a simple text file stored on the root filesystem provides
the Wi-Fi calibration data. This file hence needs to be shipped with the
OS instead of being stored on the device, and e.g. Raspbian does so. I
suppose this is mostly because there just isn't any permanent on-board
storage large enough, and, being a low-barrier DIY platform, they wanted
to make it easy for users designing systems based on it (as in: not
having to mess with raw offsets or partitions, but rather just using
tools like Etcher on Windows and editing CR-LF-terminated text files on
a FAT filesystem in Notepad...).

However, practically all Qualcomm- and MediaTek-based Wi-Fi APs/routers
which come with an eMMC ship with their MAC addresses and Wi-Fi EEPROM
data stored as raw data inside a hardware or software partition.

[1]: https://github.com/dangowrt/linux/commit/8a24f03efb6b1dd7d8fdf68558146bacc71c2c1b
[2]: https://github.com/dangowrt/linux/commit/b14c0215641047d0447956b40bd344049a11becb

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-20 16:02     ` Daniel Golle
@ 2023-07-21  6:31       ` Christoph Hellwig
  2023-07-21 10:40         ` Daniel Golle
  0 siblings, 1 reply; 34+ messages in thread
From: Christoph Hellwig @ 2023-07-21  6:31 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Christoph Hellwig, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Greg Kroah-Hartman,
	Rafał Miłecki, linux-block, linux-kernel, linux-mmc,
	linux-mtd

On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > The layering here is exactly the wrong way around.  This block device
> > as nvmem provider has no business sitting in the block layer and being
> > keyed off the gendisk registration.  Instead you should create a new
> > nvmem backend that opens the block device as needed if it fits your
> > OF description without any changes to the core block layer.
> > 
> 
> Ok. I will use a class_interface instead.

I'm not sure a class_interface makes much sense here.  Why does the
block layer even need to know about you using a device as an nvmem provider?
As far as I can tell your provider should layer entirely above the
block layer and not have to be integrated with it.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-19 22:04 ` [RFC PATCH 6/6] block: implement NVMEM provider Daniel Golle
  2023-07-19 23:04   ` Damien Le Moal
  2023-07-20  7:04   ` Christoph Hellwig
@ 2023-07-21  6:35   ` Christoph Hellwig
  2023-07-21 13:31     ` Daniel Golle
  2 siblings, 1 reply; 34+ messages in thread
From: Christoph Hellwig @ 2023-07-21  6:35 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

> +static int blk_nvmem_reg_read(void *priv, unsigned int from,
> +			      void *val, size_t bytes)
> +{
> +	pgoff_t f_index = from >> PAGE_SHIFT;
> +	struct address_space *mapping;
> +	struct blk_nvmem *bnv = priv;
> +	size_t bytes_left = bytes;
> +	struct folio *folio;
> +	unsigned long offs, to_read;
> +	void *p;

Btw, this function really should be using kernel_read() on a file
opened with filp_open() instead of poking into block device internals.
That way you can even have a generic file provider that works
on anything that can be read from.
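
For illustration, the suggested read path could look roughly like this
(a sketch only: filp_open(), kernel_read() and filp_close() are existing
kernel APIs, while the helper itself and the availability of a usable
device node path at this point are assumptions):

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/fcntl.h>
#include <linux/fs.h>

/* Sketch: read 'bytes' bytes at offset 'from' from the given path. */
static int blk_nvmem_read_path(const char *path, loff_t from,
                               void *val, size_t bytes)
{
        struct file *f;
        loff_t pos = from;
        ssize_t ret;

        f = filp_open(path, O_RDONLY, 0);
        if (IS_ERR(f))
                return PTR_ERR(f);

        ret = kernel_read(f, val, bytes, &pos);
        filp_close(f, NULL);

        if (ret < 0)
                return ret;

        return ret == (ssize_t)bytes ? 0 : -EIO;
}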


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 0/6] nvmem: add block device NVMEM provider
  2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
                   ` (7 preceding siblings ...)
  2023-07-20 15:30 ` Bart Van Assche
@ 2023-07-21  8:32 ` Ming Lei
  2023-07-21 11:08   ` Daniel Golle
  2023-07-21 11:10   ` Greg Kroah-Hartman
  8 siblings, 2 replies; 34+ messages in thread
From: Ming Lei @ 2023-07-21  8:32 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Min Li,
	Christian Loehle, Adrian Hunter, Hannes Reinecke, Jack Wang,
	Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede, Ye Bin,
	Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd, ming.lei

On Wed, Jul 19, 2023 at 11:01:14PM +0100, Daniel Golle wrote:
> On embedded devices using an eMMC it is common that one or more (hw/sw)
> partitions on the eMMC are used to store MAC addresses and Wi-Fi
> calibration EEPROM data.
> 
> Implement an NVMEM provider backed by block devices as typically the
> NVMEM framework is used to have kernel drivers read and use binary data
> from EEPROMs, efuses, flash memory (MTD), ...
> 
> In order to be able to reference hardware partitions on an eMMC, add code
> to bind each hardware partition to a specific firmware subnode.
> 
> This series is meant to open the discussion on how exactly the device tree
> schema for block devices and partitions may look, and even whether using
> the block layer to back the NVMEM device is the way to go at all -- to me
> it seemed to be a good solution because it will be reusable, e.g. for NVMe.

Just wondering why you don't use request_firmware() in drivers which consume
the data; then the logic can be moved out of the kernel, and you needn't
deal with device tree & block devices.

Or Android doesn't support udev and initrd?


Thanks,
Ming


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-21  6:31       ` Christoph Hellwig
@ 2023-07-21 10:40         ` Daniel Golle
  2023-07-21 11:11           ` Greg Kroah-Hartman
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Golle @ 2023-07-21 10:40 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Thu, Jul 20, 2023 at 11:31:06PM -0700, Christoph Hellwig wrote:
> On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> > On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > > The layering here is exactly the wrong way around.  This block device
> > > as nvmem provider has no business sitting in the block layer and being
> > > keyed off the gendisk registration.  Instead you should create a new
> > > nvmem backend that opens the block device as needed if it fits your
> > > OF description without any changes to the core block layer.
> > > 
> > 
> > Ok. I will use a class_interface instead.
> 
> I'm not sure a class_interface makes much sense here.  Why does the
> block layer even need to know about you using a device as an nvmem provider?

It doesn't. But it has to notify the nvmem providing driver about the
addition of new block devices. This is what I'm using class_interface
for, simply to hook into .add_dev of the block_class.

> As far as I can tell your provider should layer entirely above the
> block layer and not have to be integrated with it.

My approach using class_interface doesn't require any changes to be
made to existing block code. However, it does use block_class. If
you see any other good option to implement matching of and usage of
block devices by in-kernel users, please let me know.
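
As a minimal sketch, the class_interface hook described above could look
like this (assuming current class_interface callback signatures;
blk_nvmem_register() is a hypothetical helper standing in for the
registration logic of this series):

#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/init.h>

static int blk_nvmem_add_dev(struct device *dev)
{
        /* Called once per device added to block_class, i.e. for whole
         * disks as well as for partitions. */
        return blk_nvmem_register(dev_to_disk(dev));
}

static struct class_interface blk_nvmem_interface = {
        .class = &block_class,
        .add_dev = blk_nvmem_add_dev,
};

static int __init blk_nvmem_init(void)
{
        /* class_interface_register() replays .add_dev for all devices
         * which already exist in the class, covering coldplug. */
        return class_interface_register(&blk_nvmem_interface);
}
device_initcall(blk_nvmem_init);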

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 0/6] nvmem: add block device NVMEM provider
  2023-07-21  8:32 ` Ming Lei
@ 2023-07-21 11:08   ` Daniel Golle
  2023-07-21 11:10   ` Greg Kroah-Hartman
  1 sibling, 0 replies; 34+ messages in thread
From: Daniel Golle @ 2023-07-21 11:08 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Min Li,
	Christian Loehle, Adrian Hunter, Hannes Reinecke, Jack Wang,
	Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede, Ye Bin,
	Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Fri, Jul 21, 2023 at 04:32:49PM +0800, Ming Lei wrote:
> On Wed, Jul 19, 2023 at 11:01:14PM +0100, Daniel Golle wrote:
> > On embedded devices using an eMMC it is common that one or more (hw/sw)
> > partitions on the eMMC are used to store MAC addresses and Wi-Fi
> > calibration EEPROM data.
> > 
> > Implement an NVMEM provider backed by block devices as typically the
> > NVMEM framework is used to have kernel drivers read and use binary data
> > from EEPROMs, efuses, flash memory (MTD), ...
> > 
> > In order to be able to reference hardware partitions on an eMMC, add code
> > to bind each hardware partition to a specific firmware subnode.
> > 
> > This series is meant to open the discussion on how exactly the device tree
> > schema for block devices and partitions may look, and even whether using
> > the block layer to back the NVMEM device is the way to go at all -- to me
> > it seemed to be a good solution because it will be reusable, e.g. for NVMe.
> 
> Just wondering why you don't use request_firmware() in drivers which consume
> the data; then the logic can be moved out of the kernel, and you needn't
> deal with device tree & block devices.

The thing is: why should the OS need to know how to read the
calibration data that is fed into a wireless driver on specific
hardware/firmware?

Or even worse: the MAC address of the main Ethernet interface? What if,
for one of many possible reasons, the extraction script in userland
breaks and you need to recover the system via SSH (because it's a
router or switch and doesn't even have a local console)?

Having the location of firmware artifacts supplied by the firmware
itself (via device tree) seems to be the natural choice; it is also how
this is done for devices booting off NOR or NAND flash, using the NVMEM
framework. Why should eMMC be the exception?

> 
> Or Android doesn't support udev and initrd?

Yes, sure, but then the OS ROM needs to know about and handle a lot of
very device-specific details in userland, and if (unlike Android) the
same OS ROM should run on many devices, it will have to ship all of
those details in a huge initrd or root filesystem.

Also, using userspace machinery to acquire something as basic as a MAC
address seems overly complex and much more fragile than having the
firmware instruct the kernel via device tree to do so.

Hence, within the OpenWrt project, we've been working for years now to
push for Device Tree and lately for use of the NVMEM framework, exactly
to **reduce** the amount of hardware-specific knowledge in userland and
get rid of shell scripts acting as hotplug handlers for firmware
requests and such.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 0/6] nvmem: add block device NVMEM provider
  2023-07-21  8:32 ` Ming Lei
  2023-07-21 11:08   ` Daniel Golle
@ 2023-07-21 11:10   ` Greg Kroah-Hartman
  1 sibling, 0 replies; 34+ messages in thread
From: Greg Kroah-Hartman @ 2023-07-21 11:10 UTC (permalink / raw)
  To: Ming Lei
  Cc: Daniel Golle, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Damien Le Moal,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Rafał Miłecki, linux-block, linux-kernel,
	linux-mmc, linux-mtd

On Fri, Jul 21, 2023 at 04:32:49PM +0800, Ming Lei wrote:
> On Wed, Jul 19, 2023 at 11:01:14PM +0100, Daniel Golle wrote:
> > On embedded devices using an eMMC it is common that one or more (hw/sw)
> > partitions on the eMMC are used to store MAC addresses and Wi-Fi
> > calibration EEPROM data.
> > 
> > Implement an NVMEM provider backed by block devices as typically the
> > NVMEM framework is used to have kernel drivers read and use binary data
> > from EEPROMs, efuses, flash memory (MTD), ...
> > 
> > In order to be able to reference hardware partitions on an eMMC, add code
> > to bind each hardware partition to a specific firmware subnode.
> > 
> > This series is meant to open the discussion on how exactly the device tree
> > schema for block devices and partitions may look, and even whether using
> > the block layer to back the NVMEM device is the way to go at all -- to me
> > it seemed to be a good solution because it will be reusable, e.g. for NVMe.
> 
> Just wondering why you don't use request_firmware() in drivers which consume
> the data; then the logic can be moved out of the kernel, and you needn't
> deal with device tree & block devices.
> 
> Or Android doesn't support udev and initrd?

It does support initrd, but not really udev last I looked.

But it does allow request_firmware() to be called at boot time, so yes,
finding out why that isn't used here would be good.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-21 10:40         ` Daniel Golle
@ 2023-07-21 11:11           ` Greg Kroah-Hartman
  2023-07-21 11:30             ` Daniel Golle
  0 siblings, 1 reply; 34+ messages in thread
From: Greg Kroah-Hartman @ 2023-07-21 11:11 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Christoph Hellwig, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Rafał Miłecki,
	linux-block, linux-kernel, linux-mmc, linux-mtd

On Fri, Jul 21, 2023 at 11:40:51AM +0100, Daniel Golle wrote:
> On Thu, Jul 20, 2023 at 11:31:06PM -0700, Christoph Hellwig wrote:
> > On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> > > On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > > > The layering here is exactly the wrong way around.  This block device
> > > > as nvmem provider has no business sitting in the block layer and being
> > > > keyed off the gendisk registration.  Instead you should create a new
> > > > nvmem backend that opens the block device as needed if it fits your
> > > > OF description without any changes to the core block layer.
> > > > 
> > > 
> > > Ok. I will use a class_interface instead.
> > 
> > I'm not sure a class_interface makes much sense here.  Why does the
> > block layer even need to know about you using a device as an nvmem provider?
> 
> It doesn't. But it has to notify the nvmem providing driver about the
> addition of new block devices. This is what I'm using class_interface
> for, simply to hook into .add_dev of the block_class.

Why is this single type of block device so special as to require this,
yet all others are not?  Encoding this into the block layer feels like a
huge layering violation to me; why not do it the way all other block
drivers do it instead?

> > As far as I can tell your provider should layer entirely above the
> > block layer and not have to be integrated with it.
> 
> My approach using class_interface doesn't require any changes to be
> made to existing block code. However, it does use block_class. If
> you see any other good option to implement matching of and usage of
> block devices by in-kernel users, please let me know.

Do not use block_class, again, that should only be for the block core to
touch.  Individual block drivers should never be poking around in it.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-21 11:11           ` Greg Kroah-Hartman
@ 2023-07-21 11:30             ` Daniel Golle
  2023-07-21 11:39               ` Greg Kroah-Hartman
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Golle @ 2023-07-21 11:30 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Christoph Hellwig, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Rafał Miłecki,
	linux-block, linux-kernel, linux-mmc, linux-mtd

On Fri, Jul 21, 2023 at 01:11:40PM +0200, Greg Kroah-Hartman wrote:
> On Fri, Jul 21, 2023 at 11:40:51AM +0100, Daniel Golle wrote:
> > On Thu, Jul 20, 2023 at 11:31:06PM -0700, Christoph Hellwig wrote:
> > > On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> > > > On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > > > > The layering here is exactly the wrong way around.  This block device
> > > > > as nvmem provider has no business sitting in the block layer and being
> > > > > keyed off the gendisk registration.  Instead you should create a new
> > > > > nvmem backend that opens the block device as needed if it fits your
> > > > > OF description without any changes to the core block layer.
> > > > > 
> > > > 
> > > > Ok. I will use a class_interface instead.
> > > 
> > > I'm not sure a class_interface makes much sense here.  Why does the
> > > block layer even need to know about you using a device as an nvmem provider?
> > 
> > It doesn't. But it has to notify the nvmem providing driver about the
> > addition of new block devices. This is what I'm using class_interface
> > for, simply to hook into .add_dev of the block_class.
> 
> Why is this single type of block device so special as to require this,
> yet all others are not?  Encoding this into the block layer feels like a
> huge layering violation to me; why not do it the way all other block
> drivers do it instead?

I was thinking of this as a generic solution in no way tied to one specific
type of block device. *Any* internal block device which can be used to
boot from should also be usable as an NVMEM provider imho.
Just like all MTD devices also act as NVMEM providers (though in the case
of block devices I'd make that opt-in via device tree).

> 
> > > As far as I can tell your provider should layer entirely above the
> > > block layer and not have to be integrated with it.
> > 
> > My approach using class_interface doesn't require any changes to be
> > made to existing block code. However, it does use block_class. If
> > you see any other good option to implement matching of and usage of
> > block devices by in-kernel users, please let me know.
> 
> Do not use block_class, again, that should only be for the block core to
> touch.  Individual block drivers should never be poking around in it.

Do I have any other options to coldplug and be notified about newly
added block devices, so the block-device-consuming driver can know
about them?
This is not a rhetorical question; I've been looking for other ways
and haven't found anything better than class_find_device or
class_interface. Using those also prevents blk-nvmem from being built
as a module, so I'd really like to find alternatives.
E.g. for MTD we have struct mtd_notifier and register_mtd_user().
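
For reference, a sketch of the MTD pattern mentioned above (the hook
bodies are placeholders):

#include <linux/init.h>
#include <linux/mtd/mtd.h>

static void nvmem_mtd_add(struct mtd_info *mtd)
{
        /* Called for every MTD device: existing devices are replayed
         * at registration time (coldplug), new ones on addition. */
}

static void nvmem_mtd_remove(struct mtd_info *mtd)
{
        /* Tear down whatever nvmem_mtd_add() set up for this device. */
}

static struct mtd_notifier nvmem_mtd_notifier = {
        .add = nvmem_mtd_add,
        .remove = nvmem_mtd_remove,
};

static int __init nvmem_mtd_example_init(void)
{
        register_mtd_user(&nvmem_mtd_notifier);
        return 0;
}
device_initcall(nvmem_mtd_example_init);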

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-21 11:30             ` Daniel Golle
@ 2023-07-21 11:39               ` Greg Kroah-Hartman
  2023-07-21 13:03                 ` Daniel Golle
  0 siblings, 1 reply; 34+ messages in thread
From: Greg Kroah-Hartman @ 2023-07-21 11:39 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Christoph Hellwig, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Rafał Miłecki,
	linux-block, linux-kernel, linux-mmc, linux-mtd

On Fri, Jul 21, 2023 at 12:30:10PM +0100, Daniel Golle wrote:
> On Fri, Jul 21, 2023 at 01:11:40PM +0200, Greg Kroah-Hartman wrote:
> > On Fri, Jul 21, 2023 at 11:40:51AM +0100, Daniel Golle wrote:
> > > On Thu, Jul 20, 2023 at 11:31:06PM -0700, Christoph Hellwig wrote:
> > > > On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> > > > > On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > > > > > The layering here is exactly the wrong way around.  This block device
> > > > > > as nvmem provider has no business sitting in the block layer and being
> > > > > > keyed off the gendisk registration.  Instead you should create a new
> > > > > > nvmem backend that opens the block device as needed if it fits your
> > > > > > OF description without any changes to the core block layer.
> > > > > > 
> > > > > 
> > > > > Ok. I will use a class_interface instead.
> > > > 
> > > > I'm not sure a class_interface makes much sense here.  Why does the
> > > > block layer even need to know about you using a device as an nvmem provider?
> > > 
> > > It doesn't. But it has to notify the nvmem providing driver about the
> > > addition of new block devices. This is what I'm using class_interface
> > > for, simply to hook into .add_dev of the block_class.
> > 
> > Why is this single type of block device so special as to require this,
> > yet all others are not?  Encoding this into the block layer feels like a
> > huge layering violation to me; why not do it the way all other block
> > drivers do it instead?
> 
> I was thinking of this as a generic solution in no way tied to one specific
> type of block device. *Any* internal block device which can be used to
> boot from should also be usable as an NVMEM provider imho.

Define "internal" :)

And that's all up to the boot process in userspace; the kernel doesn't
care about this.

> > > > As far as I can tell your provider should layer entirely above the
> > > > block layer and not have to be integrated with it.
> > > 
> > > My approach using class_interface doesn't require any changes to be
> > > made to existing block code. However, it does use block_class. If
> > > you see any other good option to implement matching of and usage of
> > > block devices by in-kernel users, please let me know.
> > 
> > Do not use block_class, again, that should only be for the block core to
> > touch.  Individual block drivers should never be poking around in it.
> 
> Do I have any other options to coldplug and be notified about newly
> added block devices, so the block-device-consuming driver can know
> about them?

What other options do you need?

> This is not a rhetorical question; I've been looking for other ways
> and haven't found anything better than class_find_device or
> class_interface.

Never use that, sorry, that's not for a driver to touch.

> Using those also prevents blk-nvmem from being built as
> a module, so I'd really like to find alternatives.
> E.g. for MTD we have struct mtd_notifier and register_mtd_user().

Your storage/hardware driver should be the thing that "finds block
devices" and registers them with the block class core, right?  After
that, what matters?

confused,

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-21 11:39               ` Greg Kroah-Hartman
@ 2023-07-21 13:03                 ` Daniel Golle
  0 siblings, 0 replies; 34+ messages in thread
From: Daniel Golle @ 2023-07-21 13:03 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Christoph Hellwig, Jens Axboe, Ulf Hansson, Miquel Raynal,
	Richard Weinberger, Vignesh Raghavendra, Dave Chinner,
	Matthew Wilcox, Thomas Weißschuh, Jan Kara, Damien Le Moal,
	Ming Lei, Min Li, Christian Loehle, Adrian Hunter,
	Hannes Reinecke, Jack Wang, Florian Fainelli, Yeqi Fu,
	Avri Altman, Hans de Goede, Ye Bin, Rafał Miłecki,
	linux-block, linux-kernel, linux-mmc, linux-mtd

On Fri, Jul 21, 2023 at 01:39:18PM +0200, Greg Kroah-Hartman wrote:
> On Fri, Jul 21, 2023 at 12:30:10PM +0100, Daniel Golle wrote:
> > On Fri, Jul 21, 2023 at 01:11:40PM +0200, Greg Kroah-Hartman wrote:
> > > On Fri, Jul 21, 2023 at 11:40:51AM +0100, Daniel Golle wrote:
> > > > On Thu, Jul 20, 2023 at 11:31:06PM -0700, Christoph Hellwig wrote:
> > > > > On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> > > > > > On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > > > > > > The layering here is exactly the wrong way around.  This block device
> > > > > > > as nvmem provider has no business sitting in the block layer and being
> > > > > > > keyed off the gendisk registration.  Instead you should create a new
> > > > > > > nvmem backend that opens the block device as needed if it fits your
> > > > > > > OF description without any changes to the core block layer.
> > > > > > > 
> > > > > > 
> > > > > > Ok. I will use a class_interface instead.
> > > > > 
> > > > > I'm not sure a class_interface makes much sense here.  Why does the
> > > > > block layer even need to know about you using a device as an nvmem provider?
> > > > 
> > > > It doesn't. But it has to notify the nvmem providing driver about the
> > > > addition of new block devices. This is what I'm using class_interface
> > > > for, simply to hook into .add_dev of the block_class.
> > > 
> > > Why is this single type of block device so special as to require this,
> > > yet all others are not?  Encoding this into the block layer feels like a
> > > huge layering violation to me; why not do it the way all other block
> > > drivers do it instead?
> > 
> > I was thinking of this as a generic solution in no way tied to one specific
> > type of block device. *Any* internal block device which can be used to
> > boot from should also be usable as an NVMEM provider imho.
> 
> Define "internal" :)

Exactly, it could be anything: MMC, SATA, even USB. As long as there is
a device tree node associated with the device, it could be referenced
as an NVMEM provider. And that's what this series tries to implement.

> 
> And that's all up to the boot process in userspace; the kernel doesn't
> care about this.

So let's look at any random embedded Linux router or Wi-Fi AP:

Typically we have some kind of NAND or NOR flash memory with a
hard-coded partitioning dedicating different areas of the flash to the
bootloader, the bootloader environment, factory data (such as
MAC addresses and Wi-Fi EEPROMs), a Linux kernel, a root filesystem and
some way to store user data.

And things are not that different when using an eMMC instead of NOR
flash; from the perspective of the OS the main difference is just that
the eMMC is larger and represented as a block device, and that typically
MBR or GPT partitioning is used.

> 
> > > > > As far as I can tell your provider should layer entirely above the
> > > > > block layer and not have to be integrated with it.
> > > > 
> > > > My approach using class_interface doesn't require any changes to be
> > > > made to existing block code. However, it does use block_class. If
> > > > you see any other good option to implement matching of and usage of
> > > > block devices by in-kernel users, please let me know.
> > > 
> > > Do not use block_class, again, that should only be for the block core to
> > > touch.  Individual block drivers should never be poking around in it.
> > 
> > Do I have any other options to coldplug and be notified about newly
> > added block devices, so the block-device-consuming driver can know
> > about them?
> 
> What other options do you need?

Either calls for adding/removing the NVMEM devices need to be added to
block/genhd.c and block/partitions/core.c, or, if that's not acceptable
(see below), we'd need something like what register_mtd_user() is for
MTD:
a way for a driver to cold-plug existing block devices and be notified
about newly added ones.

> 
> > This is not a rhetorical question; I've been looking for other ways
> > and haven't found anything better than class_find_device or
> > class_interface.
> 
> Never use that, sorry, that's not for a driver to touch.
> 
> > Using those also prevents blk-nvmem from being built as
> > a module, so I'd really like to find alternatives.
> > E.g. for MTD we have struct mtd_notifier and register_mtd_user().
> 
> Your storage/hardware driver should be the thing that "finds block
> devices" and registers them with the block class core, right?  After
> that, what matters?

Well, I'd say: make the device available as an NVMEM provider, as it
could be needed e.g. for the Ethernet driver to know the MAC address to
be used.

This is how it is done for MTD devices here:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/mtd/mtdcore.c#n723
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/mtd/mtdcore.c#n804

... and then used for example here:
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/tree/arch/arm64/boot/dts/marvell/armada-3720-gl-mv1000.dts#n227

The idea to use a class_interface at all is a result of the criticism
by Christoph Hellwig that the series requires changes to be made to the
block subsystem, i.e. calls to register and unregister the NVMEM
provider.

Yet another way would obviously be to let the block-device-providing
driver (e.g. drivers/mmc/core/block.c) register the NVMEM provider, but
that would then have to be re-implemented for every relevant block
driver, which also seems wrong.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 6/6] block: implement NVMEM provider
  2023-07-21  6:35   ` Christoph Hellwig
@ 2023-07-21 13:31     ` Daniel Golle
  0 siblings, 0 replies; 34+ messages in thread
From: Daniel Golle @ 2023-07-21 13:31 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, Ulf Hansson, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Thu, Jul 20, 2023 at 11:35:55PM -0700, Christoph Hellwig wrote:
> > +static int blk_nvmem_reg_read(void *priv, unsigned int from,
> > +			      void *val, size_t bytes)
> > +{
> > +	pgoff_t f_index = from >> PAGE_SHIFT;
> > +	struct address_space *mapping;
> > +	struct blk_nvmem *bnv = priv;
> > +	size_t bytes_left = bytes;
> > +	struct folio *folio;
> > +	unsigned long offs, to_read;
> > +	void *p;
> 
> Btw, this function really should be using kernel_read() on a file
> opened with filp_open() instead of poking into block device internals.

Unfortunately filp_open requires a device inode on a filesystem to be
able to open a block device. What if the root filesystem has not yet
been mounted, or even requires NVMEM (let's say for the Ethernet MAC
address to be known for an nfsroot, nbd or iSCSI to work) to be
available first?

Could you imagine implementing something like filp_open_bdev, which
takes a (struct block_device *) as a parameter instead of (const char *)?

That would allow the driver to open and read from the block device
before any filesystems (incl. /dev) become ready.
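
A hypothetical signature for such a helper (purely illustrative; no such
function exists in the kernel):

#include <linux/blkdev.h>
#include <linux/fs.h>

/* Hypothetical: open a struct file directly on an already-resolved
 * block device, bypassing path lookup in any mounted filesystem. */
struct file *filp_open_bdev(struct block_device *bdev, int flags);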

> That way you can even have a generic file provider that works
> on anything that can be read from.
> 

While this could also be useful, it doesn't fulfil the requirement of
making NVMEM available as early as possible for other drivers to use,
e.g. for the Ethernet MAC address.

And it would also heavily deviate from how things work with other types
of flash storage typically used in embedded Linux devices, such as SPI
NOR or NAND flash, which are represented as MTD devices.

Sidenote: block2mtd is also not an option because one cannot reference
block2mtd devices in device tree, hence they cannot serve as NVMEM
providers.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 2/6] mmc: block: set fwnode of disk devices
  2023-07-19 22:02 ` [RFC PATCH 2/6] mmc: block: set fwnode of disk devices Daniel Golle
@ 2023-08-07 13:48   ` Ulf Hansson
  2023-08-08  1:02     ` Daniel Golle
  0 siblings, 1 reply; 34+ messages in thread
From: Ulf Hansson @ 2023-08-07 13:48 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Jens Axboe, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Thu, 20 Jul 2023 at 00:02, Daniel Golle <daniel@makrotopia.org> wrote:
>
> Set fwnode of disk devices to 'block', 'boot0' and 'boot1' subnodes of
> the mmc-card. This is done in preparation for having the eMMC act as
> NVMEM provider.

Sorry, but I don't quite understand what you are trying to do here.
Maybe you should re-order the patches in the series so it becomes
clear why this is needed?

Moreover, I don't see any DT docs being updated as a part of the
series, which looks like it is needed too. That would also help to
understand what you are proposing, I think.

>
> Signed-off-by: Daniel Golle <daniel@makrotopia.org>
> ---
>  drivers/mmc/core/block.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index f701efb1fa785..fc1a9f31bd253 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -2413,6 +2413,8 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
>                                               int area_type,
>                                               unsigned int part_type)
>  {
> +       struct fwnode_handle *fwnode;
> +       struct device *ddev;
>         struct mmc_blk_data *md;
>         int devidx, ret;
>         char cap_str[10];
> @@ -2509,6 +2511,12 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
>
>         blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
>
> +       ddev = disk_to_dev(md->disk);
> +       fwnode = device_get_named_child_node(subname ? md->parent->parent :
> +                                                      md->parent,
> +                                            subname ? subname : "block");
> +       ddev->fwnode = fwnode;
> +
>         string_get_size((u64)size, 512, STRING_UNITS_2,
>                         cap_str, sizeof(cap_str));
>         pr_info("%s: %s %s %s%s\n",

Kind regards
Uffe

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 2/6] mmc: block: set fwnode of disk devices
  2023-08-07 13:48   ` Ulf Hansson
@ 2023-08-08  1:02     ` Daniel Golle
  2023-08-08 10:59       ` Ulf Hansson
  0 siblings, 1 reply; 34+ messages in thread
From: Daniel Golle @ 2023-08-08  1:02 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: Jens Axboe, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

Hi Ulf,

thank you for reviewing and suggesting ways to improve this series!

On Mon, Aug 07, 2023 at 03:48:31PM +0200, Ulf Hansson wrote:
> On Thu, 20 Jul 2023 at 00:02, Daniel Golle <daniel@makrotopia.org> wrote:
> >
> > Set fwnode of disk devices to 'block', 'boot0' and 'boot1' subnodes of
> > the mmc-card. This is done in preparation for having the eMMC act as
> > NVMEM provider.
> 
> Sorry, but I don't quite understand what you are trying to do here.
> Maybe you should re-order the patches in the series so it becomes
> clear why this is needed?
> 
> Moreover, I don't see any DT docs being updated as a part of the
> series, which looks like it is needed too. That would also help to
> understand what you are proposing, I think.

I've prepared a tree on GitHub which now also includes commits adding
dt-bindings for block devices and partitions, so they can be referenced
as nvmem-cells providers.

The dt-schema addition meant to explain this specific patch:

https://github.com/dangowrt/linux/commit/b399a758f0e1c444ae9443dc80902a30de54af09

The whole tree:

https://github.com/dangowrt/linux/commits/for-nvmem-next

Most comments have been addressed; however, I still depend on using
either a class_interface *or* adding calls to add/remove the NVMEM
representation of a block device to block/genhd.c as well as
block/partitions/core.c, simply because afaik there isn't any better
way for in-kernel users of block devices to be notified about the
creation or removal of a block device.

Cheers


Daniel

> 
> >
> > Signed-off-by: Daniel Golle <daniel@makrotopia.org>
> > ---
> >  drivers/mmc/core/block.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> > index f701efb1fa785..fc1a9f31bd253 100644
> > --- a/drivers/mmc/core/block.c
> > +++ b/drivers/mmc/core/block.c
> > @@ -2413,6 +2413,8 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
> >                                               int area_type,
> >                                               unsigned int part_type)
> >  {
> > +       struct fwnode_handle *fwnode;
> > +       struct device *ddev;
> >         struct mmc_blk_data *md;
> >         int devidx, ret;
> >         char cap_str[10];
> > @@ -2509,6 +2511,12 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
> >
> >         blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
> >
> > +       ddev = disk_to_dev(md->disk);
> > +       fwnode = device_get_named_child_node(subname ? md->parent->parent :
> > +                                                      md->parent,
> > +                                            subname ? subname : "block");
> > +       ddev->fwnode = fwnode;
> > +
> >         string_get_size((u64)size, 512, STRING_UNITS_2,
> >                         cap_str, sizeof(cap_str));
> >         pr_info("%s: %s %s %s%s\n",
> 
> Kind regards
> Uffe

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC PATCH 2/6] mmc: block: set fwnode of disk devices
  2023-08-08  1:02     ` Daniel Golle
@ 2023-08-08 10:59       ` Ulf Hansson
  0 siblings, 0 replies; 34+ messages in thread
From: Ulf Hansson @ 2023-08-08 10:59 UTC (permalink / raw)
  To: Daniel Golle
  Cc: Jens Axboe, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Dave Chinner, Matthew Wilcox,
	Thomas Weißschuh, Jan Kara, Damien Le Moal, Ming Lei,
	Min Li, Christian Loehle, Adrian Hunter, Hannes Reinecke,
	Jack Wang, Florian Fainelli, Yeqi Fu, Avri Altman, Hans de Goede,
	Ye Bin, Greg Kroah-Hartman, Rafał Miłecki, linux-block,
	linux-kernel, linux-mmc, linux-mtd

On Tue, 8 Aug 2023 at 03:02, Daniel Golle <daniel@makrotopia.org> wrote:
>
> Hi Ulf,
>
> thank you for reviewing and suggesting ways to improve this series!
>
> On Mon, Aug 07, 2023 at 03:48:31PM +0200, Ulf Hansson wrote:
> > On Thu, 20 Jul 2023 at 00:02, Daniel Golle <daniel@makrotopia.org> wrote:
> > >
> > > Set fwnode of disk devices to 'block', 'boot0' and 'boot1' subnodes of
> > > the mmc-card. This is done in preparation for having the eMMC act as
> > > NVMEM provider.
> >
> > Sorry, but I don't quite understand what you are trying to do here.
> > Maybe you should re-order the patches in the series so it becomes
> > clear why this is needed?
> >
> > Moreover, I don't see any DT docs being updated as a part of the
> > series, which looks like it is needed too. That would also help to
> > understand what you are proposing, I think.
>
> I've prepared a tree on GitHub which now also includes commits adding
> dt-bindings for block devices and partitions, so they can be referenced
> as nvmem-cells providers.
>
> The dt-schema addition meant to explain this specific patch:
>
> https://github.com/dangowrt/linux/commit/b399a758f0e1c444ae9443dc80902a30de54af09
>
> The whole tree:
>
> https://github.com/dangowrt/linux/commits/for-nvmem-next

Thanks for sharing. However, to allow people to review, I suggest you
post a new version with the updated DT bindings included.

The point is, we really need confirmation from the DT maintainers -
otherwise this is simply a no go.

>
> Most comments have been addressed; however, I still depend on using
> either a class_interface *or* adding calls to add/remove the NVMEM
> representation of a block device to block/genhd.c as well as
> block/partitions/core.c, simply because afaik there isn't any better
> way for in-kernel users of block devices to be notified about the
> creation or removal of a block device.

Okay, so that needs further discussion then. I will try to chime in.

[...]

Kind regards
Uffe

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2023-08-08 18:58 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-07-19 22:01 [RFC PATCH 0/6] nvmem: add block device NVMEM provider Daniel Golle
2023-07-19 22:02 ` [RFC PATCH 1/6] mmc: core: set card fwnode_handle Daniel Golle
2023-07-19 22:02 ` [RFC PATCH 2/6] mmc: block: set fwnode of disk devices Daniel Golle
2023-08-07 13:48   ` Ulf Hansson
2023-08-08  1:02     ` Daniel Golle
2023-08-08 10:59       ` Ulf Hansson
2023-07-19 22:03 ` [RFC PATCH 3/6] block: add new genhd flag GENHD_FL_NO_NVMEM Daniel Golle
2023-07-20  8:24   ` Hannes Reinecke
2023-07-20 13:47     ` Daniel Golle
2023-07-20 14:03       ` Hannes Reinecke
2023-07-20 14:28         ` Daniel Golle
2023-07-20 14:34           ` Hannes Reinecke
2023-07-19 22:03 ` [RFC PATCH 4/6] mtd: blkdevs: set GENHD_FL_NO_NVMEM Daniel Golle
2023-07-20  6:04   ` Miquel Raynal
2023-07-19 22:03 ` [RFC PATCH 5/6] mtd: ubi: block: " Daniel Golle
2023-07-19 22:04 ` [RFC PATCH 6/6] block: implement NVMEM provider Daniel Golle
2023-07-19 23:04   ` Damien Le Moal
2023-07-20  0:14     ` Daniel Golle
2023-07-20  7:04   ` Christoph Hellwig
2023-07-20 16:02     ` Daniel Golle
2023-07-21  6:31       ` Christoph Hellwig
2023-07-21 10:40         ` Daniel Golle
2023-07-21 11:11           ` Greg Kroah-Hartman
2023-07-21 11:30             ` Daniel Golle
2023-07-21 11:39               ` Greg Kroah-Hartman
2023-07-21 13:03                 ` Daniel Golle
2023-07-21  6:35   ` Christoph Hellwig
2023-07-21 13:31     ` Daniel Golle
2023-07-20  7:09 ` [RFC PATCH 0/6] nvmem: add block device " Christoph Hellwig
2023-07-20 15:30 ` Bart Van Assche
2023-07-20 18:03   ` Daniel Golle
2023-07-21  8:32 ` Ming Lei
2023-07-21 11:08   ` Daniel Golle
2023-07-21 11:10   ` Greg Kroah-Hartman

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).