* [PATCHv3 0/5] genhd: register default groups with device_add_disk()
From: Hannes Reinecke @ 2018-09-05  7:00 UTC
  To: Jens Axboe
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-block, linux-nvme, Hannes Reinecke

device_add_disk() doesn't allow registering a disk with default sysfs
groups, which introduces a race with udev, as these groups otherwise have
to be created after the uevent has been sent.
This patchset updates device_add_disk() to accept a 'groups' argument to
avoid this race and converts the obvious drivers to use it.
There are some more drivers which might benefit from this (e.g. loop or md),
but the interface there is not straightforward, so I haven't attempted those.
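
To make the race concrete, here is a minimal, hypothetical driver sketch
(the driver name and the 'serial' attribute are made up for illustration);
with the new 'groups' argument the attributes are created before the
KOBJ_ADD uevent is emitted, so udev can rely on them:

#include <linux/device.h>
#include <linux/genhd.h>
#include <linux/sysfs.h>

static ssize_t serial_show(struct device *dev,
			   struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "EXAMPLE-0001\n");	/* made-up value */
}
static DEVICE_ATTR_RO(serial);

static struct attribute *mydrv_disk_attrs[] = {
	&dev_attr_serial.attr,
	NULL,
};

static const struct attribute_group mydrv_disk_attr_group = {
	.attrs = mydrv_disk_attrs,
};

static const struct attribute_group *mydrv_disk_attr_groups[] = {
	&mydrv_disk_attr_group,
	NULL,			/* array must be NULL-terminated */
};

static void mydrv_attach(struct device *parent, struct gendisk *disk)
{
	/*
	 * Old, racy pattern: device_add_disk() sends the uevent, and
	 * only afterwards does the group appear in sysfs:
	 *
	 *	device_add_disk(parent, disk);
	 *	sysfs_create_group(&disk_to_dev(disk)->kobj,
	 *			   &mydrv_disk_attr_group);
	 *
	 * New pattern: the groups are registered with the device
	 * before the uevent is sent.
	 */
	device_add_disk(parent, disk, mydrv_disk_attr_groups);
}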

As usual, comments and reviews are welcome.

Changes since v2:
- Fold lightnvm attributes into the generic attribute group as
  suggested by Bart

Changes since v1:
- Drop first patch
- Convert lightnvm sysfs attributes as suggested by Bart

Hannes Reinecke (5):
  block: genhd: add 'groups' argument to device_add_disk
  nvme: register ns_id attributes as default sysfs groups
  aoe: use device_add_disk_with_groups()
  zram: register default groups with device_add_disk()
  virtio-blk: modernize sysfs attribute creation

 arch/um/drivers/ubd_kern.c          |   2 +-
 block/genhd.c                       |  19 +++++--
 drivers/block/aoe/aoe.h             |   1 -
 drivers/block/aoe/aoeblk.c          |  21 +++----
 drivers/block/aoe/aoedev.c          |   1 -
 drivers/block/floppy.c              |   2 +-
 drivers/block/mtip32xx/mtip32xx.c   |   2 +-
 drivers/block/ps3disk.c             |   2 +-
 drivers/block/ps3vram.c             |   2 +-
 drivers/block/rsxx/dev.c            |   2 +-
 drivers/block/skd_main.c            |   2 +-
 drivers/block/sunvdc.c              |   2 +-
 drivers/block/virtio_blk.c          |  68 +++++++++++++----------
 drivers/block/xen-blkfront.c        |   2 +-
 drivers/block/zram/zram_drv.c       |  28 +++-------
 drivers/ide/ide-cd.c                |   2 +-
 drivers/ide/ide-gd.c                |   2 +-
 drivers/memstick/core/ms_block.c    |   2 +-
 drivers/memstick/core/mspro_block.c |   2 +-
 drivers/mmc/core/block.c            |   2 +-
 drivers/mtd/mtd_blkdevs.c           |   2 +-
 drivers/nvdimm/blk.c                |   2 +-
 drivers/nvdimm/btt.c                |   2 +-
 drivers/nvdimm/pmem.c               |   2 +-
 drivers/nvme/host/core.c            |  21 +++----
 drivers/nvme/host/lightnvm.c        | 106 +++++++++++++++---------------------
 drivers/nvme/host/multipath.c       |  15 ++---
 drivers/nvme/host/nvme.h            |  10 +---
 drivers/s390/block/dasd_genhd.c     |   2 +-
 drivers/s390/block/dcssblk.c        |   2 +-
 drivers/s390/block/scm_blk.c        |   2 +-
 drivers/scsi/sd.c                   |   2 +-
 drivers/scsi/sr.c                   |   2 +-
 include/linux/genhd.h               |   5 +-
 34 files changed, 151 insertions(+), 190 deletions(-)

-- 
2.16.4

* [PATCH 1/5] block: genhd: add 'groups' argument to device_add_disk
From: Hannes Reinecke @ 2018-09-05  7:00 UTC
  To: Jens Axboe
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-block, linux-nvme, Hannes Reinecke, Martin Wilck

Update device_add_disk() to take a 'groups' argument so that
individual drivers can register a disk with additional sysfs
attributes.
This avoids the race condition the driver would otherwise have if these
groups were created separately with sysfs_create_groups() after the
uevent has been sent.
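
For callers the conversion is mechanical; a hedged sketch of the two
cases ('mydrv_attr_group' is hypothetical, defined by the driver):

	static const struct attribute_group *mydrv_groups[] = {
		&mydrv_attr_group,
		NULL,		/* array must be NULL-terminated */
	};

	/* with additional default attributes */
	device_add_disk(parent, disk, mydrv_groups);

	/* no additional attributes */
	device_add_disk(parent, disk, NULL);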

Signed-off-by: Martin Wilck <martin.wilck@suse.com>
Signed-off-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 arch/um/drivers/ubd_kern.c          |  2 +-
 block/genhd.c                       | 19 ++++++++++++++-----
 drivers/block/floppy.c              |  2 +-
 drivers/block/mtip32xx/mtip32xx.c   |  2 +-
 drivers/block/ps3disk.c             |  2 +-
 drivers/block/ps3vram.c             |  2 +-
 drivers/block/rsxx/dev.c            |  2 +-
 drivers/block/skd_main.c            |  2 +-
 drivers/block/sunvdc.c              |  2 +-
 drivers/block/virtio_blk.c          |  2 +-
 drivers/block/xen-blkfront.c        |  2 +-
 drivers/ide/ide-cd.c                |  2 +-
 drivers/ide/ide-gd.c                |  2 +-
 drivers/memstick/core/ms_block.c    |  2 +-
 drivers/memstick/core/mspro_block.c |  2 +-
 drivers/mmc/core/block.c            |  2 +-
 drivers/mtd/mtd_blkdevs.c           |  2 +-
 drivers/nvdimm/blk.c                |  2 +-
 drivers/nvdimm/btt.c                |  2 +-
 drivers/nvdimm/pmem.c               |  2 +-
 drivers/nvme/host/core.c            |  2 +-
 drivers/nvme/host/multipath.c       |  2 +-
 drivers/s390/block/dasd_genhd.c     |  2 +-
 drivers/s390/block/dcssblk.c        |  2 +-
 drivers/s390/block/scm_blk.c        |  2 +-
 drivers/scsi/sd.c                   |  2 +-
 drivers/scsi/sr.c                   |  2 +-
 include/linux/genhd.h               |  5 +++--
 28 files changed, 43 insertions(+), 33 deletions(-)

diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index 83c470364dfb..6ee4c56032f7 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -891,7 +891,7 @@ static int ubd_disk_register(int major, u64 size, int unit,
 
 	disk->private_data = &ubd_devs[unit];
 	disk->queue = ubd_devs[unit].queue;
-	device_add_disk(parent, disk);
+	device_add_disk(parent, disk, NULL);
 
 	*disk_out = disk;
 	return 0;
diff --git a/block/genhd.c b/block/genhd.c
index 8cc719a37b32..ef0936184d69 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -567,7 +567,8 @@ static int exact_lock(dev_t devt, void *data)
 	return 0;
 }
 
-static void register_disk(struct device *parent, struct gendisk *disk)
+static void register_disk(struct device *parent, struct gendisk *disk,
+			  const struct attribute_group **groups)
 {
 	struct device *ddev = disk_to_dev(disk);
 	struct block_device *bdev;
@@ -582,6 +583,10 @@ static void register_disk(struct device *parent, struct gendisk *disk)
 	/* delay uevents, until we scanned partition table */
 	dev_set_uevent_suppress(ddev, 1);
 
+	if (groups) {
+		WARN_ON(ddev->groups);
+		ddev->groups = groups;
+	}
 	if (device_add(ddev))
 		return;
 	if (!sysfs_deprecated) {
@@ -647,6 +652,7 @@ static void register_disk(struct device *parent, struct gendisk *disk)
  * __device_add_disk - add disk information to kernel list
  * @parent: parent device for the disk
  * @disk: per-device partitioning information
+ * @groups: Additional per-device sysfs groups
  * @register_queue: register the queue if set to true
  *
  * This function registers the partitioning information in @disk
@@ -655,6 +661,7 @@ static void register_disk(struct device *parent, struct gendisk *disk)
  * FIXME: error handling
  */
 static void __device_add_disk(struct device *parent, struct gendisk *disk,
+			      const struct attribute_group **groups,
 			      bool register_queue)
 {
 	dev_t devt;
@@ -698,7 +705,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 		blk_register_region(disk_devt(disk), disk->minors, NULL,
 				    exact_match, exact_lock, disk);
 	}
-	register_disk(parent, disk);
+	register_disk(parent, disk, groups);
 	if (register_queue)
 		blk_register_queue(disk);
 
@@ -712,15 +719,17 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 	blk_integrity_add(disk);
 }
 
-void device_add_disk(struct device *parent, struct gendisk *disk)
+void device_add_disk(struct device *parent, struct gendisk *disk,
+		     const struct attribute_group **groups)
+
 {
-	__device_add_disk(parent, disk, true);
+	__device_add_disk(parent, disk, groups, true);
 }
 EXPORT_SYMBOL(device_add_disk);
 
 void device_add_disk_no_queue_reg(struct device *parent, struct gendisk *disk)
 {
-	__device_add_disk(parent, disk, false);
+	__device_add_disk(parent, disk, NULL, false);
 }
 EXPORT_SYMBOL(device_add_disk_no_queue_reg);
 
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 48f622728ce6..1bc99e9dfaee 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -4676,7 +4676,7 @@ static int __init do_floppy_init(void)
 		/* to be cleaned up... */
 		disks[drive]->private_data = (void *)(long)drive;
 		disks[drive]->flags |= GENHD_FL_REMOVABLE;
-		device_add_disk(&floppy_device[drive].dev, disks[drive]);
+		device_add_disk(&floppy_device[drive].dev, disks[drive], NULL);
 	}
 
 	return 0;
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index d0666f5ce003..1d7d48d8a205 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3861,7 +3861,7 @@ static int mtip_block_initialize(struct driver_data *dd)
 	set_capacity(dd->disk, capacity);
 
 	/* Enable the block device and add it to /dev */
-	device_add_disk(&dd->pdev->dev, dd->disk);
+	device_add_disk(&dd->pdev->dev, dd->disk, NULL);
 
 	dd->bdev = bdget_disk(dd->disk, 0);
 	/*
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index afe1508d82c6..29a4419e8ba3 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -500,7 +500,7 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 		 gendisk->disk_name, priv->model, priv->raw_capacity >> 11,
 		 get_capacity(gendisk) >> 11);
 
-	device_add_disk(&dev->sbd.core, gendisk);
+	device_add_disk(&dev->sbd.core, gendisk, NULL);
 	return 0;
 
 fail_cleanup_queue:
diff --git a/drivers/block/ps3vram.c b/drivers/block/ps3vram.c
index 1e3d5de9d838..c0c50816a10b 100644
--- a/drivers/block/ps3vram.c
+++ b/drivers/block/ps3vram.c
@@ -769,7 +769,7 @@ static int ps3vram_probe(struct ps3_system_bus_device *dev)
 	dev_info(&dev->core, "%s: Using %lu MiB of GPU memory\n",
 		 gendisk->disk_name, get_capacity(gendisk) >> 11);
 
-	device_add_disk(&dev->core, gendisk);
+	device_add_disk(&dev->core, gendisk, NULL);
 	return 0;
 
 fail_cleanup_queue:
diff --git a/drivers/block/rsxx/dev.c b/drivers/block/rsxx/dev.c
index 1a92f9e65937..3894aa0f350b 100644
--- a/drivers/block/rsxx/dev.c
+++ b/drivers/block/rsxx/dev.c
@@ -226,7 +226,7 @@ int rsxx_attach_dev(struct rsxx_cardinfo *card)
 			set_capacity(card->gendisk, card->size8 >> 9);
 		else
 			set_capacity(card->gendisk, 0);
-		device_add_disk(CARD_TO_DEV(card), card->gendisk);
+		device_add_disk(CARD_TO_DEV(card), card->gendisk, NULL);
 		card->bdev_attached = 1;
 	}
 
diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 87b9e7fbf062..a85c9a622c41 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -3104,7 +3104,7 @@ static int skd_bdev_getgeo(struct block_device *bdev, struct hd_geometry *geo)
 static int skd_bdev_attach(struct device *parent, struct skd_device *skdev)
 {
 	dev_dbg(&skdev->pdev->dev, "add_disk\n");
-	device_add_disk(parent, skdev->disk);
+	device_add_disk(parent, skdev->disk, NULL);
 	return 0;
 }
 
diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index 5ca56bfae63c..09409edce384 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -850,7 +850,7 @@ static int probe_disk(struct vdc_port *port)
 	       port->vdisk_size, (port->vdisk_size >> (20 - 9)),
 	       port->vio.ver.major, port->vio.ver.minor);
 
-	device_add_disk(&port->vio.vdev->dev, g);
+	device_add_disk(&port->vio.vdev->dev, g, NULL);
 
 	return 0;
 }
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 23752dc99b00..fe80560000a1 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -780,7 +780,7 @@ static int virtblk_probe(struct virtio_device *vdev)
 	virtblk_update_capacity(vblk, false);
 	virtio_device_ready(vdev);
 
-	device_add_disk(&vdev->dev, vblk->disk);
+	device_add_disk(&vdev->dev, vblk->disk, NULL);
 	err = device_create_file(disk_to_dev(vblk->disk), &dev_attr_serial);
 	if (err)
 		goto out_del_disk;
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index a71d817e900d..e5e40272d233 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2420,7 +2420,7 @@ static void blkfront_connect(struct blkfront_info *info)
 	for (i = 0; i < info->nr_rings; i++)
 		kick_pending_request_queues(&info->rinfo[i]);
 
-	device_add_disk(&info->xbdev->dev, info->gd);
+	device_add_disk(&info->xbdev->dev, info->gd, NULL);
 
 	info->is_ready = 1;
 	return;
diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
index 44a7a255ef74..f9b59d41813f 100644
--- a/drivers/ide/ide-cd.c
+++ b/drivers/ide/ide-cd.c
@@ -1784,7 +1784,7 @@ static int ide_cd_probe(ide_drive_t *drive)
 	ide_cd_read_toc(drive);
 	g->fops = &idecd_ops;
 	g->flags |= GENHD_FL_REMOVABLE | GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE;
-	device_add_disk(&drive->gendev, g);
+	device_add_disk(&drive->gendev, g, NULL);
 	return 0;
 
 out_free_disk:
diff --git a/drivers/ide/ide-gd.c b/drivers/ide/ide-gd.c
index e823394ed543..04e008e8f6f9 100644
--- a/drivers/ide/ide-gd.c
+++ b/drivers/ide/ide-gd.c
@@ -416,7 +416,7 @@ static int ide_gd_probe(ide_drive_t *drive)
 	if (drive->dev_flags & IDE_DFLAG_REMOVABLE)
 		g->flags = GENHD_FL_REMOVABLE;
 	g->fops = &ide_gd_ops;
-	device_add_disk(&drive->gendev, g);
+	device_add_disk(&drive->gendev, g, NULL);
 	return 0;
 
 out_free_disk:
diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
index 716fc8ed31d3..8a02f11076f9 100644
--- a/drivers/memstick/core/ms_block.c
+++ b/drivers/memstick/core/ms_block.c
@@ -2146,7 +2146,7 @@ static int msb_init_disk(struct memstick_dev *card)
 		set_disk_ro(msb->disk, 1);
 
 	msb_start(card);
-	device_add_disk(&card->dev, msb->disk);
+	device_add_disk(&card->dev, msb->disk, NULL);
 	dbg("Disk added");
 	return 0;
 
diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
index 5ee932631fae..0cd30dcb6801 100644
--- a/drivers/memstick/core/mspro_block.c
+++ b/drivers/memstick/core/mspro_block.c
@@ -1236,7 +1236,7 @@ static int mspro_block_init_disk(struct memstick_dev *card)
 	set_capacity(msb->disk, capacity);
 	dev_dbg(&card->dev, "capacity set %ld\n", capacity);
 
-	device_add_disk(&card->dev, msb->disk);
+	device_add_disk(&card->dev, msb->disk, NULL);
 	msb->active = 1;
 	return 0;
 
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index a0b9102c4c6e..de8e1a8be690 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2698,7 +2698,7 @@ static int mmc_add_disk(struct mmc_blk_data *md)
 	int ret;
 	struct mmc_card *card = md->queue.card;
 
-	device_add_disk(md->parent, md->disk);
+	device_add_disk(md->parent, md->disk, NULL);
 	md->force_ro.show = force_ro_show;
 	md->force_ro.store = force_ro_store;
 	sysfs_attr_init(&md->force_ro.attr);
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 29c0bfd74e8a..6a41dfa3c36b 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -447,7 +447,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	if (new->readonly)
 		set_disk_ro(gd, 1);
 
-	device_add_disk(&new->mtd->dev, gd);
+	device_add_disk(&new->mtd->dev, gd, NULL);
 
 	if (new->disk_attributes) {
 		ret = sysfs_create_group(&disk_to_dev(gd)->kobj,
diff --git a/drivers/nvdimm/blk.c b/drivers/nvdimm/blk.c
index 62e9cb167aad..db45c6bbb7bb 100644
--- a/drivers/nvdimm/blk.c
+++ b/drivers/nvdimm/blk.c
@@ -290,7 +290,7 @@ static int nsblk_attach_disk(struct nd_namespace_blk *nsblk)
 	}
 
 	set_capacity(disk, available_disk_size >> SECTOR_SHIFT);
-	device_add_disk(dev, disk);
+	device_add_disk(dev, disk, NULL);
 	revalidate_disk(disk);
 	return 0;
 }
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 0360c015f658..b123b0dcf274 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1556,7 +1556,7 @@ static int btt_blk_init(struct btt *btt)
 		}
 	}
 	set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
-	device_add_disk(&btt->nd_btt->dev, btt->btt_disk);
+	device_add_disk(&btt->nd_btt->dev, btt->btt_disk, NULL);
 	btt->nd_btt->size = btt->nlba * (u64)btt->sector_size;
 	revalidate_disk(btt->btt_disk);
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 6071e2942053..a75d10c23d80 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -474,7 +474,7 @@ static int pmem_attach_disk(struct device *dev,
 	gendev = disk_to_dev(disk);
 	gendev->groups = pmem_attribute_groups;
 
-	device_add_disk(dev, disk);
+	device_add_disk(dev, disk, NULL);
 	if (devm_add_action_or_reset(dev, pmem_release_disk, pmem))
 		return -ENOMEM;
 
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index dd8ec1dd9219..0e824e8c8fd7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3099,7 +3099,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 
 	nvme_get_ctrl(ctrl);
 
-	device_add_disk(ctrl->device, ns->disk);
+	device_add_disk(ctrl->device, ns->disk, NULL);
 	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
 					&nvme_ns_id_attr_group))
 		pr_warn("%s: failed to create sysfs group for identification\n",
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 5a9562881d4e..477af51d01e8 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -283,7 +283,7 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
 		return;
 
 	if (!(head->disk->flags & GENHD_FL_UP)) {
-		device_add_disk(&head->subsys->dev, head->disk);
+		device_add_disk(&head->subsys->dev, head->disk, NULL);
 		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
 				&nvme_ns_id_attr_group))
 			dev_warn(&head->subsys->dev,
diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
index 7036a6c6f86f..5542d9eadfe0 100644
--- a/drivers/s390/block/dasd_genhd.c
+++ b/drivers/s390/block/dasd_genhd.c
@@ -76,7 +76,7 @@ int dasd_gendisk_alloc(struct dasd_block *block)
 	gdp->queue = block->request_queue;
 	block->gdp = gdp;
 	set_capacity(block->gdp, 0);
-	device_add_disk(&base->cdev->dev, block->gdp);
+	device_add_disk(&base->cdev->dev, block->gdp, NULL);
 	return 0;
 }
 
diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
index 23e526cda5c1..4e8aedd50cb0 100644
--- a/drivers/s390/block/dcssblk.c
+++ b/drivers/s390/block/dcssblk.c
@@ -685,7 +685,7 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char
 	}
 
 	get_device(&dev_info->dev);
-	device_add_disk(&dev_info->dev, dev_info->gd);
+	device_add_disk(&dev_info->dev, dev_info->gd, NULL);
 
 	switch (dev_info->segment_type) {
 		case SEG_TYPE_SR:
diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index 98f66b7b6794..e01889394c84 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -500,7 +500,7 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 
 	/* 512 byte sectors */
 	set_capacity(bdev->gendisk, scmdev->size >> 9);
-	device_add_disk(&scmdev->dev, bdev->gendisk);
+	device_add_disk(&scmdev->dev, bdev->gendisk, NULL);
 	return 0;
 
 out_queue:
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index b79b366a94f7..b17b0fc07d85 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3271,7 +3271,7 @@ static void sd_probe_async(void *data, async_cookie_t cookie)
 	}
 
 	blk_pm_runtime_init(sdp->request_queue, dev);
-	device_add_disk(dev, gd);
+	device_add_disk(dev, gd, NULL);
 	if (sdkp->capacity)
 		sd_dif_config_host(sdkp);
 
diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index d0389b20574d..93196083aebf 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -758,7 +758,7 @@ static int sr_probe(struct device *dev)
 
 	dev_set_drvdata(dev, cd);
 	disk->flags |= GENHD_FL_REMOVABLE;
-	device_add_disk(&sdev->sdev_gendev, disk);
+	device_add_disk(&sdev->sdev_gendev, disk, NULL);
 
 	sdev_printk(KERN_DEBUG, sdev,
 		    "Attached scsi CD-ROM %s\n", cd->cdi.name);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 57864422a2c8..0b820ff05839 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -399,10 +399,11 @@ static inline void free_part_info(struct hd_struct *part)
 extern void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part);
 
 /* block/genhd.c */
-extern void device_add_disk(struct device *parent, struct gendisk *disk);
+extern void device_add_disk(struct device *parent, struct gendisk *disk,
+			    const struct attribute_group **groups);
 static inline void add_disk(struct gendisk *disk)
 {
-	device_add_disk(NULL, disk);
+	device_add_disk(NULL, disk, NULL);
 }
 extern void device_add_disk_no_queue_reg(struct device *parent, struct gendisk *disk);
 static inline void add_disk_no_queue_reg(struct gendisk *disk)
-- 
2.16.4

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
From: Hannes Reinecke @ 2018-09-05  7:00 UTC
  To: Jens Axboe
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-block, linux-nvme, Hannes Reinecke

We should be registering the ns_id attributes as a default sysfs
attribute group; otherwise we have a race condition between the
uevent and the attributes appearing in sysfs.
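
The ordering guarantee comes from the driver core: device_add() creates
the default attribute groups before it emits the "add" uevent. Heavily
condensed and paraphrased from drivers/base/core.c (not the literal
source):

	/* device_add(), condensed */
	error = device_add_attrs(dev);	/* creates dev->groups in sysfs */
	if (error)
		goto AttrsError;
	...
	kobject_uevent(&dev->kobj, KOBJ_ADD);	/* udev sees the attrs */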

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/nvme/host/core.c      |  21 ++++-----
 drivers/nvme/host/lightnvm.c  | 106 +++++++++++++++++-------------------------
 drivers/nvme/host/multipath.c |  15 ++----
 drivers/nvme/host/nvme.h      |  10 +---
 4 files changed, 58 insertions(+), 94 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0e824e8c8fd7..e0a9e1c5b30e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2734,6 +2734,14 @@ const struct attribute_group nvme_ns_id_attr_group = {
 	.is_visible	= nvme_ns_id_attrs_are_visible,
 };
 
+const struct attribute_group *nvme_ns_id_attr_groups[] = {
+	&nvme_ns_id_attr_group,
+#ifdef CONFIG_NVM
+	&nvme_nvm_attr_group,
+#endif
+	NULL,
+};
+
 #define nvme_show_str_function(field)						\
 static ssize_t  field##_show(struct device *dev,				\
 			    struct device_attribute *attr, char *buf)		\
@@ -3099,14 +3107,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 
 	nvme_get_ctrl(ctrl);
 
-	device_add_disk(ctrl->device, ns->disk, NULL);
-	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group))
-		pr_warn("%s: failed to create sysfs group for identification\n",
-			ns->disk->disk_name);
-	if (ns->ndev && nvme_nvm_register_sysfs(ns))
-		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
-			ns->disk->disk_name);
+	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
 
 	nvme_mpath_add_disk(ns, id);
 	nvme_fault_inject_init(ns);
@@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 
 	nvme_fault_inject_fini(ns);
 	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group);
-		if (ns->ndev)
-			nvme_nvm_unregister_sysfs(ns);
 		del_gendisk(ns->disk);
 		blk_cleanup_queue(ns->queue);
 		if (blk_get_integrity(ns->disk))
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 6fe5923c95d4..941ce5fc48f5 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -1190,10 +1190,29 @@ static NVM_DEV_ATTR_12_RO(multiplane_modes);
 static NVM_DEV_ATTR_12_RO(media_capabilities);
 static NVM_DEV_ATTR_12_RO(max_phys_secs);
 
-static struct attribute *nvm_dev_attrs_12[] = {
+/* 2.0 values */
+static NVM_DEV_ATTR_20_RO(groups);
+static NVM_DEV_ATTR_20_RO(punits);
+static NVM_DEV_ATTR_20_RO(chunks);
+static NVM_DEV_ATTR_20_RO(clba);
+static NVM_DEV_ATTR_20_RO(ws_min);
+static NVM_DEV_ATTR_20_RO(ws_opt);
+static NVM_DEV_ATTR_20_RO(maxoc);
+static NVM_DEV_ATTR_20_RO(maxocpu);
+static NVM_DEV_ATTR_20_RO(mw_cunits);
+static NVM_DEV_ATTR_20_RO(write_typ);
+static NVM_DEV_ATTR_20_RO(write_max);
+static NVM_DEV_ATTR_20_RO(reset_typ);
+static NVM_DEV_ATTR_20_RO(reset_max);
+
+static struct attribute *nvm_dev_attrs[] = {
+	/* version agnostic attrs */
 	&dev_attr_version.attr,
 	&dev_attr_capabilities.attr,
+	&dev_attr_read_typ.attr,
+	&dev_attr_read_max.attr,
 
+	/* 1.2 attrs */
 	&dev_attr_vendor_opcode.attr,
 	&dev_attr_device_mode.attr,
 	&dev_attr_media_manager.attr,
@@ -1208,8 +1227,6 @@ static struct attribute *nvm_dev_attrs_12[] = {
 	&dev_attr_page_size.attr,
 	&dev_attr_hw_sector_size.attr,
 	&dev_attr_oob_sector_size.attr,
-	&dev_attr_read_typ.attr,
-	&dev_attr_read_max.attr,
 	&dev_attr_prog_typ.attr,
 	&dev_attr_prog_max.attr,
 	&dev_attr_erase_typ.attr,
@@ -1218,33 +1235,7 @@ static struct attribute *nvm_dev_attrs_12[] = {
 	&dev_attr_media_capabilities.attr,
 	&dev_attr_max_phys_secs.attr,
 
-	NULL,
-};
-
-static const struct attribute_group nvm_dev_attr_group_12 = {
-	.name		= "lightnvm",
-	.attrs		= nvm_dev_attrs_12,
-};
-
-/* 2.0 values */
-static NVM_DEV_ATTR_20_RO(groups);
-static NVM_DEV_ATTR_20_RO(punits);
-static NVM_DEV_ATTR_20_RO(chunks);
-static NVM_DEV_ATTR_20_RO(clba);
-static NVM_DEV_ATTR_20_RO(ws_min);
-static NVM_DEV_ATTR_20_RO(ws_opt);
-static NVM_DEV_ATTR_20_RO(maxoc);
-static NVM_DEV_ATTR_20_RO(maxocpu);
-static NVM_DEV_ATTR_20_RO(mw_cunits);
-static NVM_DEV_ATTR_20_RO(write_typ);
-static NVM_DEV_ATTR_20_RO(write_max);
-static NVM_DEV_ATTR_20_RO(reset_typ);
-static NVM_DEV_ATTR_20_RO(reset_max);
-
-static struct attribute *nvm_dev_attrs_20[] = {
-	&dev_attr_version.attr,
-	&dev_attr_capabilities.attr,
-
+	/* 2.0 attrs */
 	&dev_attr_groups.attr,
 	&dev_attr_punits.attr,
 	&dev_attr_chunks.attr,
@@ -1255,8 +1246,6 @@ static struct attribute *nvm_dev_attrs_20[] = {
 	&dev_attr_maxocpu.attr,
 	&dev_attr_mw_cunits.attr,
 
-	&dev_attr_read_typ.attr,
-	&dev_attr_read_max.attr,
 	&dev_attr_write_typ.attr,
 	&dev_attr_write_max.attr,
 	&dev_attr_reset_typ.attr,
@@ -1265,44 +1254,35 @@ static struct attribute *nvm_dev_attrs_20[] = {
 	NULL,
 };
 
-static const struct attribute_group nvm_dev_attr_group_20 = {
-	.name		= "lightnvm",
-	.attrs		= nvm_dev_attrs_20,
-};
-
-int nvme_nvm_register_sysfs(struct nvme_ns *ns)
+static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
+				     struct attribute *attr, int index)
 {
+	struct device *dev = container_of(kobj, struct device, kobj);
+	struct gendisk *disk = dev_to_disk(dev);
+	struct nvme_ns *ns = disk->private_data;
 	struct nvm_dev *ndev = ns->ndev;
-	struct nvm_geo *geo = &ndev->geo;
+	struct device_attribute *dev_attr =
+		container_of(attr, typeof(*dev_attr), attr);
 
-	if (!ndev)
-		return -EINVAL;
-
-	switch (geo->major_ver_id) {
-	case 1:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
-	case 2:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
-	}
-
-	return -EINVAL;
-}
-
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
-{
-	struct nvm_dev *ndev = ns->ndev;
-	struct nvm_geo *geo = &ndev->geo;
+	if (dev_attr->show == nvm_dev_attr_show)
+		return attr->mode;
 
-	switch (geo->major_ver_id) {
+	switch (ndev ? ndev->geo.major_ver_id : 0) {
 	case 1:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
+		if (dev_attr->show == nvm_dev_attr_show_12)
+			return attr->mode;
 		break;
 	case 2:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
+		if (dev_attr->show == nvm_dev_attr_show_20)
+			return attr->mode;
 		break;
 	}
+
+	return 0;
 }
+
+const struct attribute_group nvme_nvm_attr_group = {
+	.name		= "lightnvm",
+	.attrs		= nvm_dev_attrs,
+	.is_visible	= nvm_dev_attrs_visible,
+};
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 477af51d01e8..8e846095c42d 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
 	if (!head->disk)
 		return;
 
-	if (!(head->disk->flags & GENHD_FL_UP)) {
-		device_add_disk(&head->subsys->dev, head->disk, NULL);
-		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
-				&nvme_ns_id_attr_group))
-			dev_warn(&head->subsys->dev,
-				 "failed to create id group.\n");
-	}
+	if (!(head->disk->flags & GENHD_FL_UP))
+		device_add_disk(&head->subsys->dev, head->disk,
+				nvme_ns_id_attr_groups);
 
 	kblockd_schedule_work(&ns->head->requeue_work);
 }
@@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
 {
 	if (!head->disk)
 		return;
-	if (head->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
-				   &nvme_ns_id_attr_group);
+	if (head->disk->flags & GENHD_FL_UP)
 		del_gendisk(head->disk);
-	}
 	blk_set_queue_dying(head->disk->queue);
 	/* make sure all pending bios are cleaned up */
 	kblockd_schedule_work(&head->requeue_work);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bb4a2003c097..2503f8fd54da 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
 int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
 		void *log, size_t size, u64 offset);
 
-extern const struct attribute_group nvme_ns_id_attr_group;
+extern const struct attribute_group *nvme_ns_id_attr_groups[];
 extern const struct block_device_operations nvme_ns_head_ops;
 
 #ifdef CONFIG_NVME_MULTIPATH
@@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
 void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
 int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
 void nvme_nvm_unregister(struct nvme_ns *ns);
-int nvme_nvm_register_sysfs(struct nvme_ns *ns);
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
+extern const struct attribute_group nvme_nvm_attr_group;
 int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
 #else
 static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
@@ -563,11 +562,6 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
 }
 
 static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
-static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
-{
-	return 0;
-}
-static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
 static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
 							unsigned long arg)
 {
-- 
2.16.4

 #else
 static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
@@ -563,11 +562,6 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
 }
 
 static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
-static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
-{
-	return 0;
-}
-static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
 static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
 							unsigned long arg)
 {
-- 
2.16.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 3/5] aoe: use device_add_disk_with_groups()
  2018-09-05  7:00 ` Hannes Reinecke
@ 2018-09-05  7:00   ` Hannes Reinecke
  -1 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-09-05  7:00 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-block, linux-nvme, Hannes Reinecke, Hannes Reinecke

Use device_add_disk() with default sysfs groups to avoid a race
condition with udev during startup.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Ed L. Cashin <ed.cashin@acm.org>
Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
---
 drivers/block/aoe/aoe.h    |  1 -
 drivers/block/aoe/aoeblk.c | 21 +++++++--------------
 drivers/block/aoe/aoedev.c |  1 -
 3 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/drivers/block/aoe/aoe.h b/drivers/block/aoe/aoe.h
index c0ebda1283cc..015c68017a1c 100644
--- a/drivers/block/aoe/aoe.h
+++ b/drivers/block/aoe/aoe.h
@@ -201,7 +201,6 @@ int aoeblk_init(void);
 void aoeblk_exit(void);
 void aoeblk_gdalloc(void *);
 void aoedisk_rm_debugfs(struct aoedev *d);
-void aoedisk_rm_sysfs(struct aoedev *d);
 
 int aoechr_init(void);
 void aoechr_exit(void);
diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index 429ebb84b592..ff770e7d9e52 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -177,10 +177,15 @@ static struct attribute *aoe_attrs[] = {
 	NULL,
 };
 
-static const struct attribute_group attr_group = {
+static const struct attribute_group aoe_attr_group = {
 	.attrs = aoe_attrs,
 };
 
+static const struct attribute_group *aoe_attr_groups[] = {
+	&aoe_attr_group,
+	NULL,
+};
+
 static const struct file_operations aoe_debugfs_fops = {
 	.open = aoe_debugfs_open,
 	.read = seq_read,
@@ -219,17 +224,6 @@ aoedisk_rm_debugfs(struct aoedev *d)
 	d->debugfs = NULL;
 }
 
-static int
-aoedisk_add_sysfs(struct aoedev *d)
-{
-	return sysfs_create_group(&disk_to_dev(d->gd)->kobj, &attr_group);
-}
-void
-aoedisk_rm_sysfs(struct aoedev *d)
-{
-	sysfs_remove_group(&disk_to_dev(d->gd)->kobj, &attr_group);
-}
-
 static int
 aoeblk_open(struct block_device *bdev, fmode_t mode)
 {
@@ -417,8 +411,7 @@ aoeblk_gdalloc(void *vp)
 
 	spin_unlock_irqrestore(&d->lock, flags);
 
-	add_disk(gd);
-	aoedisk_add_sysfs(d);
+	device_add_disk(NULL, gd, aoe_attr_groups);
 	aoedisk_add_debugfs(d);
 
 	spin_lock_irqsave(&d->lock, flags);
diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c
index 41060e9cedf2..f29a140cdbc1 100644
--- a/drivers/block/aoe/aoedev.c
+++ b/drivers/block/aoe/aoedev.c
@@ -275,7 +275,6 @@ freedev(struct aoedev *d)
 	del_timer_sync(&d->timer);
 	if (d->gd) {
 		aoedisk_rm_debugfs(d);
-		aoedisk_rm_sysfs(d);
 		del_gendisk(d->gd);
 		put_disk(d->gd);
 		blk_cleanup_queue(d->blkq);
-- 
2.16.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 4/5] zram: register default groups with device_add_disk()
  2018-09-05  7:00 ` Hannes Reinecke
@ 2018-09-05  7:00   ` Hannes Reinecke
  -1 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-09-05  7:00 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-block, linux-nvme, Hannes Reinecke, Hannes Reinecke,
	Minchan Kim, Nitin Gupta

Register default sysfs groups during device_add_disk() to avoid a
race condition with udev during startup.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
---
 drivers/block/zram/zram_drv.c | 28 +++++++---------------------
 1 file changed, 7 insertions(+), 21 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index a1d6b5597c17..4879595200e1 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1636,6 +1636,11 @@ static const struct attribute_group zram_disk_attr_group = {
 	.attrs = zram_disk_attrs,
 };
 
+static const struct attribute_group *zram_disk_attr_groups[] = {
+	&zram_disk_attr_group,
+	NULL,
+};
+
 /*
  * Allocate and initialize new zram device. the function returns
  * '>= 0' device_id upon success, and negative value otherwise.
@@ -1716,24 +1721,14 @@ static int zram_add(void)
 
 	zram->disk->queue->backing_dev_info->capabilities |=
 			(BDI_CAP_STABLE_WRITES | BDI_CAP_SYNCHRONOUS_IO);
-	add_disk(zram->disk);
-
-	ret = sysfs_create_group(&disk_to_dev(zram->disk)->kobj,
-				&zram_disk_attr_group);
-	if (ret < 0) {
-		pr_err("Error creating sysfs group for device %d\n",
-				device_id);
-		goto out_free_disk;
-	}
+	device_add_disk(NULL, zram->disk, zram_disk_attr_groups);
+
 	strlcpy(zram->compressor, default_compressor, sizeof(zram->compressor));
 
 	zram_debugfs_register(zram);
 	pr_info("Added device: %s\n", zram->disk->disk_name);
 	return device_id;
 
-out_free_disk:
-	del_gendisk(zram->disk);
-	put_disk(zram->disk);
 out_free_queue:
 	blk_cleanup_queue(queue);
 out_free_idr:
@@ -1762,15 +1757,6 @@ static int zram_remove(struct zram *zram)
 	mutex_unlock(&bdev->bd_mutex);
 
 	zram_debugfs_unregister(zram);
-	/*
-	 * Remove sysfs first, so no one will perform a disksize
-	 * store while we destroy the devices. This also helps during
-	 * hot_remove -- zram_reset_device() is the last holder of
-	 * ->init_lock, no later/concurrent disksize_store() or any
-	 * other sysfs handlers are possible.
-	 */
-	sysfs_remove_group(&disk_to_dev(zram->disk)->kobj,
-			&zram_disk_attr_group);
 
 	/* Make sure all the pending I/O are finished */
 	fsync_bdev(bdev);
-- 
2.16.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 5/5] virtio-blk: modernize sysfs attribute creation
  2018-09-05  7:00 ` Hannes Reinecke
@ 2018-09-05  7:00   ` Hannes Reinecke
  -1 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-09-05  7:00 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-block, linux-nvme, Hannes Reinecke, Hannes Reinecke

Use new-style DEVICE_ATTR_RO/DEVICE_ATTR_RW to create the sysfs attributes
and register the disk with default sysfs attribute groups.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
---
 drivers/block/virtio_blk.c | 68 ++++++++++++++++++++++++++--------------------
 1 file changed, 39 insertions(+), 29 deletions(-)
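
A quick note on the macros for readers following along: DEVICE_ATTR_RO()
and DEVICE_ATTR_RW() derive the attribute name, mode and callbacks from
the show/store function names, which is why the helpers are renamed to
serial_show()/cache_type_show() in the diff below. As a rough sketch of
the expansion (not the literal kernel macro text):

	/* DEVICE_ATTR_RO(serial) expands to approximately: */
	struct device_attribute dev_attr_serial =
		__ATTR(serial, 0444, serial_show, NULL);

	/* DEVICE_ATTR_RW(cache_type) expands to approximately: */
	struct device_attribute dev_attr_cache_type =
		__ATTR(cache_type, 0644, cache_type_show, cache_type_store);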

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index fe80560000a1..086c6bb12baa 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -351,8 +351,8 @@ static int minor_to_index(int minor)
 	return minor >> PART_BITS;
 }
 
-static ssize_t virtblk_serial_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+static ssize_t serial_show(struct device *dev,
+			   struct device_attribute *attr, char *buf)
 {
 	struct gendisk *disk = dev_to_disk(dev);
 	int err;
@@ -371,7 +371,7 @@ static ssize_t virtblk_serial_show(struct device *dev,
 	return err;
 }
 
-static DEVICE_ATTR(serial, 0444, virtblk_serial_show, NULL);
+static DEVICE_ATTR_RO(serial);
 
 /* The queue's logical block size must be set before calling this */
 static void virtblk_update_capacity(struct virtio_blk *vblk, bool resize)
@@ -545,8 +545,8 @@ static const char *const virtblk_cache_types[] = {
 };
 
 static ssize_t
-virtblk_cache_type_store(struct device *dev, struct device_attribute *attr,
-			 const char *buf, size_t count)
+cache_type_store(struct device *dev, struct device_attribute *attr,
+		 const char *buf, size_t count)
 {
 	struct gendisk *disk = dev_to_disk(dev);
 	struct virtio_blk *vblk = disk->private_data;
@@ -564,8 +564,7 @@ virtblk_cache_type_store(struct device *dev, struct device_attribute *attr,
 }
 
 static ssize_t
-virtblk_cache_type_show(struct device *dev, struct device_attribute *attr,
-			 char *buf)
+cache_type_show(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	struct gendisk *disk = dev_to_disk(dev);
 	struct virtio_blk *vblk = disk->private_data;
@@ -575,12 +574,38 @@ virtblk_cache_type_show(struct device *dev, struct device_attribute *attr,
 	return snprintf(buf, 40, "%s\n", virtblk_cache_types[writeback]);
 }
 
-static const struct device_attribute dev_attr_cache_type_ro =
-	__ATTR(cache_type, 0444,
-	       virtblk_cache_type_show, NULL);
-static const struct device_attribute dev_attr_cache_type_rw =
-	__ATTR(cache_type, 0644,
-	       virtblk_cache_type_show, virtblk_cache_type_store);
+static DEVICE_ATTR_RW(cache_type);
+
+static struct attribute *virtblk_attrs[] = {
+	&dev_attr_serial.attr,
+	&dev_attr_cache_type.attr,
+	NULL,
+};
+
+static umode_t virtblk_attrs_are_visible(struct kobject *kobj,
+		struct attribute *a, int n)
+{
+	struct device *dev = container_of(kobj, struct device, kobj);
+	struct gendisk *disk = dev_to_disk(dev);
+	struct virtio_blk *vblk = disk->private_data;
+	struct virtio_device *vdev = vblk->vdev;
+
+	if (a == &dev_attr_cache_type.attr &&
+	    !virtio_has_feature(vdev, VIRTIO_BLK_F_CONFIG_WCE))
+		return S_IRUGO;
+
+	return a->mode;
+}
+
+static const struct attribute_group virtblk_attr_group = {
+	.attrs = virtblk_attrs,
+	.is_visible = virtblk_attrs_are_visible,
+};
+
+static const struct attribute_group *virtblk_attr_groups[] = {
+	&virtblk_attr_group,
+	NULL,
+};
 
 static int virtblk_init_request(struct blk_mq_tag_set *set, struct request *rq,
 		unsigned int hctx_idx, unsigned int numa_node)
@@ -780,24 +805,9 @@ static int virtblk_probe(struct virtio_device *vdev)
 	virtblk_update_capacity(vblk, false);
 	virtio_device_ready(vdev);
 
-	device_add_disk(&vdev->dev, vblk->disk, NULL);
-	err = device_create_file(disk_to_dev(vblk->disk), &dev_attr_serial);
-	if (err)
-		goto out_del_disk;
-
-	if (virtio_has_feature(vdev, VIRTIO_BLK_F_CONFIG_WCE))
-		err = device_create_file(disk_to_dev(vblk->disk),
-					 &dev_attr_cache_type_rw);
-	else
-		err = device_create_file(disk_to_dev(vblk->disk),
-					 &dev_attr_cache_type_ro);
-	if (err)
-		goto out_del_disk;
+	device_add_disk(&vdev->dev, vblk->disk, virtblk_attr_groups);
 	return 0;
 
-out_del_disk:
-	del_gendisk(vblk->disk);
-	blk_cleanup_queue(vblk->disk->queue);
 out_free_tags:
 	blk_mq_free_tag_set(&vblk->tag_set);
 out_put_disk:
-- 
2.16.4
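
One design choice worth noting in virtblk_attrs_are_visible() above: when
the VIRTIO_BLK_F_CONFIG_WCE feature is not negotiated, the callback does
not hide cache_type, it degrades the attribute to read-only:

	if (a == &dev_attr_cache_type.attr &&
	    !virtio_has_feature(vdev, VIRTIO_BLK_F_CONFIG_WCE))
		return S_IRUGO;	/* visible, but mode 0444 */

Returning S_IRUGO instead of 0 preserves the behaviour of the removed
dev_attr_cache_type_ro variant; returning 0 would remove the sysfs file
entirely.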

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-09-05  7:00   ` Hannes Reinecke
@ 2018-09-05 13:18     ` Christoph Hellwig
  -1 siblings, 0 replies; 61+ messages in thread
From: Christoph Hellwig @ 2018-09-05 13:18 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Jens Axboe, Christoph Hellwig, Sagi Grimberg, Keith Busch,
	Bart van Assche, linux-block, linux-nvme, Hannes Reinecke

On Wed, Sep 05, 2018 at 09:00:50AM +0200, Hannes Reinecke wrote:
> We should be registering the ns_id attributes as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.

Please give Bart credit for his work, as the lightnvm bits are almost
bigger than the rest.

> +static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
> +				     struct attribute *attr, int index)
>  {
> +	struct device *dev = container_of(kobj, struct device, kobj);
> +	struct gendisk *disk = dev_to_disk(dev);
> +	struct nvme_ns *ns = disk->private_data;
>  	struct nvm_dev *ndev = ns->ndev;
> +	struct device_attribute *dev_attr =
> +		container_of(attr, typeof(*dev_attr), attr);
>  
> +	if (dev_attr->show == nvm_dev_attr_show)
> +		return attr->mode;
>  
> +	switch (ndev ? ndev->geo.major_ver_id : 0) {

How could ndev be zero here?

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-09-05 13:18     ` Christoph Hellwig
@ 2018-09-05 13:32       ` Hannes Reinecke
  -1 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-09-05 13:32 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-block, linux-nvme, Hannes Reinecke

On 09/05/2018 03:18 PM, Christoph Hellwig wrote:
> On Wed, Sep 05, 2018 at 09:00:50AM +0200, Hannes Reinecke wrote:
>> We should be registering the ns_id attributes as default sysfs
>> attribute groups, otherwise we have a race condition between
>> the uevent and the attributes appearing in sysfs.
> 
> Please give Bart credit for his work, as the lightnvm bits are almost
> bigger than the rest.
> 
Okay, will be doing so.

>> +static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
>> +				     struct attribute *attr, int index)
>>  {
>> +	struct device *dev = container_of(kobj, struct device, kobj);
>> +	struct gendisk *disk = dev_to_disk(dev);
>> +	struct nvme_ns *ns = disk->private_data;
>>  	struct nvm_dev *ndev = ns->ndev;
>> +	struct device_attribute *dev_attr =
>> +		container_of(attr, typeof(*dev_attr), attr);
>>  
>> +	if (dev_attr->show == nvm_dev_attr_show)
>> +		return attr->mode;
>>  
>> +	switch (ndev ? ndev->geo.major_ver_id : 0) {
> 
> How could ndev be zero here?
> 
For 'normal' NVMe devices (i.e. non-lightnvm ones): as we now register
all sysfs attributes (including the lightnvm ones) by default, we need
to blank them out for non-lightnvm devices.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-09-05 13:32       ` Hannes Reinecke
@ 2018-09-05 13:45         ` Christoph Hellwig
  -1 siblings, 0 replies; 61+ messages in thread
From: Christoph Hellwig @ 2018-09-05 13:45 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, Jens Axboe, Sagi Grimberg, Keith Busch,
	Bart van Assche, linux-block, linux-nvme, Hannes Reinecke

On Wed, Sep 05, 2018 at 03:32:03PM +0200, Hannes Reinecke wrote:
> On 09/05/2018 03:18 PM, Christoph Hellwig wrote:
> > On Wed, Sep 05, 2018 at 09:00:50AM +0200, Hannes Reinecke wrote:
> >> We should be registering the ns_id attributes as default sysfs
> >> attribute groups, otherwise we have a race condition between
> >> the uevent and the attributes appearing in sysfs.
> > 
> > Please give Bart credit for his work, as the lightnvm bits are almost
> > bigger than the rest.
> > 
> Okay, will be doing so.
> 
> >> +static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
> >> +				     struct attribute *attr, int index)
> >>  {
> >> +	struct device *dev = container_of(kobj, struct device, kobj);
> >> +	struct gendisk *disk = dev_to_disk(dev);
> >> +	struct nvme_ns *ns = disk->private_data;
> >>  	struct nvm_dev *ndev = ns->ndev;
> >> +	struct device_attribute *dev_attr =
> >> +		container_of(attr, typeof(*dev_attr), attr);
> >>  
> >> +	if (dev_attr->show == nvm_dev_attr_show)
> >> +		return attr->mode;
> >>  
> >> +	switch (ndev ? ndev->geo.major_ver_id : 0) {
> > 
> > How could ndev be zero here?
> > 
> For 'normal' NVMe devices (ie non-lightnvm). As we now register all
> sysfs attributes (including the lightnvm ones) per default we'll need to
> blank them out for non-lightnvm devices.

But then we need to exit early at the beginning of the function,
as we should not register attributes using nvm_dev_attr_show (or
anything else for that matter) either.
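
A minimal sketch of that early exit, assuming the v3 visibility callback
quoted above (this is essentially what the v4 repost later in this thread
ends up doing):

	static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
					     struct attribute *attr, int index)
	{
		struct device *dev = container_of(kobj, struct device, kobj);
		struct gendisk *disk = dev_to_disk(dev);
		struct nvme_ns *ns = disk->private_data;
		struct nvm_dev *ndev = ns->ndev;
		struct device_attribute *dev_attr =
			container_of(attr, typeof(*dev_attr), attr);

		/* not a lightnvm device: hide the whole group */
		if (!ndev)
			return 0;

		/* version-agnostic attributes are always visible */
		if (dev_attr->show == nvm_dev_attr_show)
			return attr->mode;

		switch (ndev->geo.major_ver_id) {
		case 1:
			if (dev_attr->show == nvm_dev_attr_show_12)
				return attr->mode;
			break;
		case 2:
			if (dev_attr->show == nvm_dev_attr_show_20)
				return attr->mode;
			break;
		}

		return 0;
	}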

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/5] block: genhd: add 'groups' argument to device_add_disk
  2018-09-05  7:00   ` Hannes Reinecke
@ 2018-09-05 15:24     ` Bart Van Assche
  -1 siblings, 0 replies; 61+ messages in thread
From: Bart Van Assche @ 2018-09-05 15:24 UTC (permalink / raw)
  To: Hannes Reinecke, Jens Axboe
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-block, linux-nvme, Martin Wilck, Hannes Reinecke

On Wed, 2018-09-05 at 09:00 +0200, Hannes Reinecke wrote:
> Update device_add_disk() to take a 'groups' argument so that
> individual drivers can register a device with additional sysfs
> attributes.
> This avoids a race condition the driver would otherwise have if these
> groups were created with sysfs_create_group() after the disk was added.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
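
For reference, a sketch of the prototype under review, inferred from the
call sites elsewhere in this series (the authoritative declaration is in
the include/linux/genhd.h hunk of the patch itself, not quoted here):

	void device_add_disk(struct device *parent, struct gendisk *disk,
			     const struct attribute_group **groups);

	/* drivers without extra attributes keep passing NULL: */
	device_add_disk(dev, disk, NULL);

	/* drivers with default groups pass a NULL-terminated array: */
	device_add_disk(&vdev->dev, vblk->disk, virtblk_attr_groups);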

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-09-05 13:45         ` Christoph Hellwig
@ 2018-09-06  9:56           ` Hannes Reinecke
  -1 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-09-06  9:56 UTC (permalink / raw)
  To: Christoph Hellwig, Hannes Reinecke
  Cc: Jens Axboe, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-block, linux-nvme

On 09/05/2018 03:45 PM, Christoph Hellwig wrote:
> On Wed, Sep 05, 2018 at 03:32:03PM +0200, Hannes Reinecke wrote:
>> On 09/05/2018 03:18 PM, Christoph Hellwig wrote:
>>> On Wed, Sep 05, 2018 at 09:00:50AM +0200, Hannes Reinecke wrote:
>>>> We should be registering the ns_id attributes as default sysfs
>>>> attribute groups, otherwise we have a race condition between
>>>> the uevent and the attributes appearing in sysfs.
>>>
>>> Please give Bart credit for his work, as the lightnvm bits are almost
>>> bigger than the rest.
>>>
>> Okay, will be doing so.
>>
>>>> +static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
>>>> +				     struct attribute *attr, int index)
>>>>  {
>>>> +	struct device *dev = container_of(kobj, struct device, kobj);
>>>> +	struct gendisk *disk = dev_to_disk(dev);
>>>> +	struct nvme_ns *ns = disk->private_data;
>>>>  	struct nvm_dev *ndev = ns->ndev;
>>>> +	struct device_attribute *dev_attr =
>>>> +		container_of(attr, typeof(*dev_attr), attr);
>>>>  
>>>> +	if (dev_attr->show == nvm_dev_attr_show)
>>>> +		return attr->mode;
>>>>  
>>>> +	switch (ndev ? ndev->geo.major_ver_id : 0) {
>>>
>>> How could ndev be zero here?
>>>
>> For 'normal' NVMe devices (ie non-lightnvm). As we now register all
>> sysfs attributes (including the lightnvm ones) per default we'll need to
>> blank them out for non-lightnvm devices.
> 
> But then we need to exit early at the beginning of the function,
> as we should not register attributes using nvm_dev_attr_show (or
> anything else for that matter) either.
> 
Okay, will be fixing it up.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		               zSeries & Storage
hare@suse.com			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCHv3 0/5] genhd: register default groups with device_add_disk()
  2018-09-05  7:00 ` Hannes Reinecke
@ 2018-09-21  5:48   ` Christoph Hellwig
  -1 siblings, 0 replies; 61+ messages in thread
From: Christoph Hellwig @ 2018-09-21  5:48 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Jens Axboe, Christoph Hellwig, Sagi Grimberg, Keith Busch,
	Bart van Assche, linux-block, linux-nvme

Can you resend this with the one easy fixup pointed out?  It would
be good to finally get the race fix merged.

On Wed, Sep 05, 2018 at 09:00:48AM +0200, Hannes Reinecke wrote:
> device_add_disk() doesn't allow to register with default sysfs groups,
> which introduces a race with udev as these groups have to be created after
> the uevent has been sent.
> This patchset updates device_add_disk() to accept a 'groups' argument to
> avoid this race and updates the obvious drivers to use it.
> There are some more drivers which might benefit from this (eg loop or md),
> but the interface is not straightforward so I haven't attempted it.
> 
> As usual, comments and reviews are welcome.
> 
> Changes to v2:
> - Fold lightnvm attributes into the generic attribute group as
>   suggested by Bart
> 
> Changes to v1:
> - Drop first patch
> - Convert lightnvm sysfs attributes as suggested by Bart
> 
> Hannes Reinecke (5):
>   block: genhd: add 'groups' argument to device_add_disk
>   nvme: register ns_id attributes as default sysfs groups
>   aoe: use device_add_disk_with_groups()
>   zram: register default groups with device_add_disk()
>   virtio-blk: modernize sysfs attribute creation
> 
>  arch/um/drivers/ubd_kern.c          |   2 +-
>  block/genhd.c                       |  19 +++++--
>  drivers/block/aoe/aoe.h             |   1 -
>  drivers/block/aoe/aoeblk.c          |  21 +++----
>  drivers/block/aoe/aoedev.c          |   1 -
>  drivers/block/floppy.c              |   2 +-
>  drivers/block/mtip32xx/mtip32xx.c   |   2 +-
>  drivers/block/ps3disk.c             |   2 +-
>  drivers/block/ps3vram.c             |   2 +-
>  drivers/block/rsxx/dev.c            |   2 +-
>  drivers/block/skd_main.c            |   2 +-
>  drivers/block/sunvdc.c              |   2 +-
>  drivers/block/virtio_blk.c          |  68 +++++++++++++----------
>  drivers/block/xen-blkfront.c        |   2 +-
>  drivers/block/zram/zram_drv.c       |  28 +++-------
>  drivers/ide/ide-cd.c                |   2 +-
>  drivers/ide/ide-gd.c                |   2 +-
>  drivers/memstick/core/ms_block.c    |   2 +-
>  drivers/memstick/core/mspro_block.c |   2 +-
>  drivers/mmc/core/block.c            |   2 +-
>  drivers/mtd/mtd_blkdevs.c           |   2 +-
>  drivers/nvdimm/blk.c                |   2 +-
>  drivers/nvdimm/btt.c                |   2 +-
>  drivers/nvdimm/pmem.c               |   2 +-
>  drivers/nvme/host/core.c            |  21 +++----
>  drivers/nvme/host/lightnvm.c        | 106 +++++++++++++++---------------------
>  drivers/nvme/host/multipath.c       |  15 ++---
>  drivers/nvme/host/nvme.h            |  10 +---
>  drivers/s390/block/dasd_genhd.c     |   2 +-
>  drivers/s390/block/dcssblk.c        |   2 +-
>  drivers/s390/block/scm_blk.c        |   2 +-
>  drivers/scsi/sd.c                   |   2 +-
>  drivers/scsi/sr.c                   |   2 +-
>  include/linux/genhd.h               |   5 +-
>  34 files changed, 151 insertions(+), 190 deletions(-)
> 
> -- 
> 2.16.4
---end quoted text---

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCHv3 0/5] genhd: register default groups with device_add_disk()
  2018-09-21  5:48   ` Christoph Hellwig
@ 2018-09-27 19:02     ` Bart Van Assche
  -1 siblings, 0 replies; 61+ messages in thread
From: Bart Van Assche @ 2018-09-27 19:02 UTC (permalink / raw)
  To: Christoph Hellwig, Hannes Reinecke
  Cc: Jens Axboe, linux-block, Sagi Grimberg, linux-nvme, Keith Busch,
	Bart van Assche

On Fri, 2018-09-21 at 07:48 +0200, Christoph Hellwig wrote:
> Can you resend this with the one easy fixup pointed out?  It would
> be good to finally get the race fix merged.

Seconded. I would also like to see these patches merged upstream.

Bart.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-09-28  6:17   ` Hannes Reinecke
@ 2018-09-28 15:17     ` Christoph Hellwig
  -1 siblings, 0 replies; 61+ messages in thread
From: Christoph Hellwig @ 2018-09-28 15:17 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, Bart van Assche, Jens Axboe, Sagi Grimberg,
	Keith Busch, linux-block, linux-nvme, Hannes Reinecke

On Fri, Sep 28, 2018 at 08:17:20AM +0200, Hannes Reinecke wrote:
> We should be registering the ns_id attributes as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-09-28  6:17   ` Hannes Reinecke
@ 2018-09-28 14:15     ` Keith Busch
  -1 siblings, 0 replies; 61+ messages in thread
From: Keith Busch @ 2018-09-28 14:15 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, Jens Axboe, linux-block, Hannes Reinecke,
	Bart van Assche, linux-nvme, Keith Busch, Sagi Grimberg

On Fri, Sep 28, 2018 at 08:17:20AM +0200, Hannes Reinecke wrote:
> We should be registering the ns_id attributes as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.
> 
> Suggested-by: Bart van Assche <bvanassche@acm.org>
> Signed-off-by: Hannes Reinecke <hare@suse.com>

Thanks, looks great!

Reviewed-by: Keith Busch <keith.busch@intel.com>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-09-28  6:17 [PATCHv4 " Hannes Reinecke
@ 2018-09-28  6:17   ` Hannes Reinecke
  0 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-09-28  6:17 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Bart van Assche, Jens Axboe, Sagi Grimberg, Keith Busch,
	linux-block, linux-nvme, Hannes Reinecke, Hannes Reinecke

We should be registering the ns_id attributes as default sysfs
attribute groups, otherwise we have a race condition between
the uevent and the attributes appearing in sysfs.

Suggested-by: Bart van Assche <bvanassche@acm.org>
Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/nvme/host/core.c      |  21 ++++-----
 drivers/nvme/host/lightnvm.c  | 105 ++++++++++++++++++------------------------
 drivers/nvme/host/multipath.c |  15 ++----
 drivers/nvme/host/nvme.h      |  10 +---
 4 files changed, 59 insertions(+), 92 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0e824e8c8fd7..e0a9e1c5b30e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2734,6 +2734,14 @@ const struct attribute_group nvme_ns_id_attr_group = {
 	.is_visible	= nvme_ns_id_attrs_are_visible,
 };
 
+const struct attribute_group *nvme_ns_id_attr_groups[] = {
+	&nvme_ns_id_attr_group,
+#ifdef CONFIG_NVM
+	&nvme_nvm_attr_group,
+#endif
+	NULL,
+};
+
 #define nvme_show_str_function(field)						\
 static ssize_t  field##_show(struct device *dev,				\
 			    struct device_attribute *attr, char *buf)		\
@@ -3099,14 +3107,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 
 	nvme_get_ctrl(ctrl);
 
-	device_add_disk(ctrl->device, ns->disk, NULL);
-	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group))
-		pr_warn("%s: failed to create sysfs group for identification\n",
-			ns->disk->disk_name);
-	if (ns->ndev && nvme_nvm_register_sysfs(ns))
-		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
-			ns->disk->disk_name);
+	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
 
 	nvme_mpath_add_disk(ns, id);
 	nvme_fault_inject_init(ns);
@@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 
 	nvme_fault_inject_fini(ns);
 	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group);
-		if (ns->ndev)
-			nvme_nvm_unregister_sysfs(ns);
 		del_gendisk(ns->disk);
 		blk_cleanup_queue(ns->queue);
 		if (blk_get_integrity(ns->disk))
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 6fe5923c95d4..1e4f97538838 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -1190,10 +1190,29 @@ static NVM_DEV_ATTR_12_RO(multiplane_modes);
 static NVM_DEV_ATTR_12_RO(media_capabilities);
 static NVM_DEV_ATTR_12_RO(max_phys_secs);
 
-static struct attribute *nvm_dev_attrs_12[] = {
+/* 2.0 values */
+static NVM_DEV_ATTR_20_RO(groups);
+static NVM_DEV_ATTR_20_RO(punits);
+static NVM_DEV_ATTR_20_RO(chunks);
+static NVM_DEV_ATTR_20_RO(clba);
+static NVM_DEV_ATTR_20_RO(ws_min);
+static NVM_DEV_ATTR_20_RO(ws_opt);
+static NVM_DEV_ATTR_20_RO(maxoc);
+static NVM_DEV_ATTR_20_RO(maxocpu);
+static NVM_DEV_ATTR_20_RO(mw_cunits);
+static NVM_DEV_ATTR_20_RO(write_typ);
+static NVM_DEV_ATTR_20_RO(write_max);
+static NVM_DEV_ATTR_20_RO(reset_typ);
+static NVM_DEV_ATTR_20_RO(reset_max);
+
+static struct attribute *nvm_dev_attrs[] = {
+	/* version agnostic attrs */
 	&dev_attr_version.attr,
 	&dev_attr_capabilities.attr,
+	&dev_attr_read_typ.attr,
+	&dev_attr_read_max.attr,
 
+	/* 1.2 attrs */
 	&dev_attr_vendor_opcode.attr,
 	&dev_attr_device_mode.attr,
 	&dev_attr_media_manager.attr,
@@ -1208,8 +1227,6 @@ static struct attribute *nvm_dev_attrs_12[] = {
 	&dev_attr_page_size.attr,
 	&dev_attr_hw_sector_size.attr,
 	&dev_attr_oob_sector_size.attr,
-	&dev_attr_read_typ.attr,
-	&dev_attr_read_max.attr,
 	&dev_attr_prog_typ.attr,
 	&dev_attr_prog_max.attr,
 	&dev_attr_erase_typ.attr,
@@ -1218,33 +1235,7 @@ static struct attribute *nvm_dev_attrs_12[] = {
 	&dev_attr_media_capabilities.attr,
 	&dev_attr_max_phys_secs.attr,
 
-	NULL,
-};
-
-static const struct attribute_group nvm_dev_attr_group_12 = {
-	.name		= "lightnvm",
-	.attrs		= nvm_dev_attrs_12,
-};
-
-/* 2.0 values */
-static NVM_DEV_ATTR_20_RO(groups);
-static NVM_DEV_ATTR_20_RO(punits);
-static NVM_DEV_ATTR_20_RO(chunks);
-static NVM_DEV_ATTR_20_RO(clba);
-static NVM_DEV_ATTR_20_RO(ws_min);
-static NVM_DEV_ATTR_20_RO(ws_opt);
-static NVM_DEV_ATTR_20_RO(maxoc);
-static NVM_DEV_ATTR_20_RO(maxocpu);
-static NVM_DEV_ATTR_20_RO(mw_cunits);
-static NVM_DEV_ATTR_20_RO(write_typ);
-static NVM_DEV_ATTR_20_RO(write_max);
-static NVM_DEV_ATTR_20_RO(reset_typ);
-static NVM_DEV_ATTR_20_RO(reset_max);
-
-static struct attribute *nvm_dev_attrs_20[] = {
-	&dev_attr_version.attr,
-	&dev_attr_capabilities.attr,
-
+	/* 2.0 attrs */
 	&dev_attr_groups.attr,
 	&dev_attr_punits.attr,
 	&dev_attr_chunks.attr,
@@ -1255,8 +1246,6 @@ static struct attribute *nvm_dev_attrs_20[] = {
 	&dev_attr_maxocpu.attr,
 	&dev_attr_mw_cunits.attr,
 
-	&dev_attr_read_typ.attr,
-	&dev_attr_read_max.attr,
 	&dev_attr_write_typ.attr,
 	&dev_attr_write_max.attr,
 	&dev_attr_reset_typ.attr,
@@ -1265,44 +1254,38 @@ static struct attribute *nvm_dev_attrs_20[] = {
 	NULL,
 };
 
-static const struct attribute_group nvm_dev_attr_group_20 = {
-	.name		= "lightnvm",
-	.attrs		= nvm_dev_attrs_20,
-};
-
-int nvme_nvm_register_sysfs(struct nvme_ns *ns)
+static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
+				     struct attribute *attr, int index)
 {
+	struct device *dev = container_of(kobj, struct device, kobj);
+	struct gendisk *disk = dev_to_disk(dev);
+	struct nvme_ns *ns = disk->private_data;
 	struct nvm_dev *ndev = ns->ndev;
-	struct nvm_geo *geo = &ndev->geo;
+	struct device_attribute *dev_attr =
+		container_of(attr, typeof(*dev_attr), attr);
 
 	if (!ndev)
-		return -EINVAL;
-
-	switch (geo->major_ver_id) {
-	case 1:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
-	case 2:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
-	}
-
-	return -EINVAL;
-}
+		return 0;
 
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
-{
-	struct nvm_dev *ndev = ns->ndev;
-	struct nvm_geo *geo = &ndev->geo;
+	if (dev_attr->show == nvm_dev_attr_show)
+		return attr->mode;
 
-	switch (geo->major_ver_id) {
+	switch (ndev->geo.major_ver_id) {
 	case 1:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
+		if (dev_attr->show == nvm_dev_attr_show_12)
+			return attr->mode;
 		break;
 	case 2:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
+		if (dev_attr->show == nvm_dev_attr_show_20)
+			return attr->mode;
 		break;
 	}
+
+	return 0;
 }
+
+const struct attribute_group nvme_nvm_attr_group = {
+	.name		= "lightnvm",
+	.attrs		= nvm_dev_attrs,
+	.is_visible	= nvm_dev_attrs_visible,
+};
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 477af51d01e8..8e846095c42d 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
 	if (!head->disk)
 		return;
 
-	if (!(head->disk->flags & GENHD_FL_UP)) {
-		device_add_disk(&head->subsys->dev, head->disk, NULL);
-		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
-				&nvme_ns_id_attr_group))
-			dev_warn(&head->subsys->dev,
-				 "failed to create id group.\n");
-	}
+	if (!(head->disk->flags & GENHD_FL_UP))
+		device_add_disk(&head->subsys->dev, head->disk,
+				nvme_ns_id_attr_groups);
 
 	kblockd_schedule_work(&ns->head->requeue_work);
 }
@@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
 {
 	if (!head->disk)
 		return;
-	if (head->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
-				   &nvme_ns_id_attr_group);
+	if (head->disk->flags & GENHD_FL_UP)
 		del_gendisk(head->disk);
-	}
 	blk_set_queue_dying(head->disk->queue);
 	/* make sure all pending bios are cleaned up */
 	kblockd_schedule_work(&head->requeue_work);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bb4a2003c097..2503f8fd54da 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
 int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
 		void *log, size_t size, u64 offset);
 
-extern const struct attribute_group nvme_ns_id_attr_group;
+extern const struct attribute_group *nvme_ns_id_attr_groups[];
 extern const struct block_device_operations nvme_ns_head_ops;
 
 #ifdef CONFIG_NVME_MULTIPATH
@@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
 void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
 int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
 void nvme_nvm_unregister(struct nvme_ns *ns);
-int nvme_nvm_register_sysfs(struct nvme_ns *ns);
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
+extern const struct attribute_group nvme_nvm_attr_group;
 int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
 #else
 static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
@@ -563,11 +562,6 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
 }
 
 static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
-static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
-{
-	return 0;
-}
-static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
 static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
 							unsigned long arg)
 {
-- 
2.16.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-09-28  6:17   ` Hannes Reinecke
  0 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-09-28  6:17 UTC (permalink / raw)


We should be registering the ns_id attributes as default sysfs
attribute groups; otherwise we have a race condition between
the uevent being sent and the attributes appearing in sysfs.
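
In outline: device_add_disk() emits the KOBJ_ADD uevent, so attributes
created afterwards with sysfs_create_group() may not exist yet when
udev processes the event. Condensed from the diff below, illustrative
only:

	/* before: groups created only after the uevent has fired */
	device_add_disk(ctrl->device, ns->disk, NULL);
	sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
			   &nvme_ns_id_attr_group);

	/* after: the driver core creates the groups before the uevent */
	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);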

Suggested-by: Bart van Assche <bvanassche at acm.org>
Signed-off-by: Hannes Reinecke <hare at suse.com>
---
 drivers/nvme/host/core.c      |  21 ++++-----
 drivers/nvme/host/lightnvm.c  | 105 ++++++++++++++++++------------------------
 drivers/nvme/host/multipath.c |  15 ++----
 drivers/nvme/host/nvme.h      |  10 +---
 4 files changed, 59 insertions(+), 92 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0e824e8c8fd7..e0a9e1c5b30e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2734,6 +2734,14 @@ const struct attribute_group nvme_ns_id_attr_group = {
 	.is_visible	= nvme_ns_id_attrs_are_visible,
 };
 
+const struct attribute_group *nvme_ns_id_attr_groups[] = {
+	&nvme_ns_id_attr_group,
+#ifdef CONFIG_NVM
+	&nvme_nvm_attr_group,
+#endif
+	NULL,
+};
+
 #define nvme_show_str_function(field)						\
 static ssize_t  field##_show(struct device *dev,				\
 			    struct device_attribute *attr, char *buf)		\
@@ -3099,14 +3107,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 
 	nvme_get_ctrl(ctrl);
 
-	device_add_disk(ctrl->device, ns->disk, NULL);
-	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group))
-		pr_warn("%s: failed to create sysfs group for identification\n",
-			ns->disk->disk_name);
-	if (ns->ndev && nvme_nvm_register_sysfs(ns))
-		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
-			ns->disk->disk_name);
+	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
 
 	nvme_mpath_add_disk(ns, id);
 	nvme_fault_inject_init(ns);
@@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 
 	nvme_fault_inject_fini(ns);
 	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group);
-		if (ns->ndev)
-			nvme_nvm_unregister_sysfs(ns);
 		del_gendisk(ns->disk);
 		blk_cleanup_queue(ns->queue);
 		if (blk_get_integrity(ns->disk))
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 6fe5923c95d4..1e4f97538838 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -1190,10 +1190,29 @@ static NVM_DEV_ATTR_12_RO(multiplane_modes);
 static NVM_DEV_ATTR_12_RO(media_capabilities);
 static NVM_DEV_ATTR_12_RO(max_phys_secs);
 
-static struct attribute *nvm_dev_attrs_12[] = {
+/* 2.0 values */
+static NVM_DEV_ATTR_20_RO(groups);
+static NVM_DEV_ATTR_20_RO(punits);
+static NVM_DEV_ATTR_20_RO(chunks);
+static NVM_DEV_ATTR_20_RO(clba);
+static NVM_DEV_ATTR_20_RO(ws_min);
+static NVM_DEV_ATTR_20_RO(ws_opt);
+static NVM_DEV_ATTR_20_RO(maxoc);
+static NVM_DEV_ATTR_20_RO(maxocpu);
+static NVM_DEV_ATTR_20_RO(mw_cunits);
+static NVM_DEV_ATTR_20_RO(write_typ);
+static NVM_DEV_ATTR_20_RO(write_max);
+static NVM_DEV_ATTR_20_RO(reset_typ);
+static NVM_DEV_ATTR_20_RO(reset_max);
+
+static struct attribute *nvm_dev_attrs[] = {
+	/* version agnostic attrs */
 	&dev_attr_version.attr,
 	&dev_attr_capabilities.attr,
+	&dev_attr_read_typ.attr,
+	&dev_attr_read_max.attr,
 
+	/* 1.2 attrs */
 	&dev_attr_vendor_opcode.attr,
 	&dev_attr_device_mode.attr,
 	&dev_attr_media_manager.attr,
@@ -1208,8 +1227,6 @@ static struct attribute *nvm_dev_attrs_12[] = {
 	&dev_attr_page_size.attr,
 	&dev_attr_hw_sector_size.attr,
 	&dev_attr_oob_sector_size.attr,
-	&dev_attr_read_typ.attr,
-	&dev_attr_read_max.attr,
 	&dev_attr_prog_typ.attr,
 	&dev_attr_prog_max.attr,
 	&dev_attr_erase_typ.attr,
@@ -1218,33 +1235,7 @@ static struct attribute *nvm_dev_attrs_12[] = {
 	&dev_attr_media_capabilities.attr,
 	&dev_attr_max_phys_secs.attr,
 
-	NULL,
-};
-
-static const struct attribute_group nvm_dev_attr_group_12 = {
-	.name		= "lightnvm",
-	.attrs		= nvm_dev_attrs_12,
-};
-
-/* 2.0 values */
-static NVM_DEV_ATTR_20_RO(groups);
-static NVM_DEV_ATTR_20_RO(punits);
-static NVM_DEV_ATTR_20_RO(chunks);
-static NVM_DEV_ATTR_20_RO(clba);
-static NVM_DEV_ATTR_20_RO(ws_min);
-static NVM_DEV_ATTR_20_RO(ws_opt);
-static NVM_DEV_ATTR_20_RO(maxoc);
-static NVM_DEV_ATTR_20_RO(maxocpu);
-static NVM_DEV_ATTR_20_RO(mw_cunits);
-static NVM_DEV_ATTR_20_RO(write_typ);
-static NVM_DEV_ATTR_20_RO(write_max);
-static NVM_DEV_ATTR_20_RO(reset_typ);
-static NVM_DEV_ATTR_20_RO(reset_max);
-
-static struct attribute *nvm_dev_attrs_20[] = {
-	&dev_attr_version.attr,
-	&dev_attr_capabilities.attr,
-
+	/* 2.0 attrs */
 	&dev_attr_groups.attr,
 	&dev_attr_punits.attr,
 	&dev_attr_chunks.attr,
@@ -1255,8 +1246,6 @@ static struct attribute *nvm_dev_attrs_20[] = {
 	&dev_attr_maxocpu.attr,
 	&dev_attr_mw_cunits.attr,
 
-	&dev_attr_read_typ.attr,
-	&dev_attr_read_max.attr,
 	&dev_attr_write_typ.attr,
 	&dev_attr_write_max.attr,
 	&dev_attr_reset_typ.attr,
@@ -1265,44 +1254,38 @@ static struct attribute *nvm_dev_attrs_20[] = {
 	NULL,
 };
 
-static const struct attribute_group nvm_dev_attr_group_20 = {
-	.name		= "lightnvm",
-	.attrs		= nvm_dev_attrs_20,
-};
-
-int nvme_nvm_register_sysfs(struct nvme_ns *ns)
+static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
+				     struct attribute *attr, int index)
 {
+	struct device *dev = container_of(kobj, struct device, kobj);
+	struct gendisk *disk = dev_to_disk(dev);
+	struct nvme_ns *ns = disk->private_data;
 	struct nvm_dev *ndev = ns->ndev;
-	struct nvm_geo *geo = &ndev->geo;
+	struct device_attribute *dev_attr =
+		container_of(attr, typeof(*dev_attr), attr);
 
 	if (!ndev)
-		return -EINVAL;
-
-	switch (geo->major_ver_id) {
-	case 1:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
-	case 2:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
-	}
-
-	return -EINVAL;
-}
+		return 0;
 
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
-{
-	struct nvm_dev *ndev = ns->ndev;
-	struct nvm_geo *geo = &ndev->geo;
+	if (dev_attr->show == nvm_dev_attr_show)
+		return attr->mode;
 
-	switch (geo->major_ver_id) {
+	switch (ndev->geo.major_ver_id) {
 	case 1:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
+		if (dev_attr->show == nvm_dev_attr_show_12)
+			return attr->mode;
 		break;
 	case 2:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
+		if (dev_attr->show == nvm_dev_attr_show_20)
+			return attr->mode;
 		break;
 	}
+
+	return 0;
 }
+
+const struct attribute_group nvme_nvm_attr_group = {
+	.name		= "lightnvm",
+	.attrs		= nvm_dev_attrs,
+	.is_visible	= nvm_dev_attrs_visible,
+};
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 477af51d01e8..8e846095c42d 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
 	if (!head->disk)
 		return;
 
-	if (!(head->disk->flags & GENHD_FL_UP)) {
-		device_add_disk(&head->subsys->dev, head->disk, NULL);
-		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
-				&nvme_ns_id_attr_group))
-			dev_warn(&head->subsys->dev,
-				 "failed to create id group.\n");
-	}
+	if (!(head->disk->flags & GENHD_FL_UP))
+		device_add_disk(&head->subsys->dev, head->disk,
+				nvme_ns_id_attr_groups);
 
 	kblockd_schedule_work(&ns->head->requeue_work);
 }
@@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
 {
 	if (!head->disk)
 		return;
-	if (head->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
-				   &nvme_ns_id_attr_group);
+	if (head->disk->flags & GENHD_FL_UP)
 		del_gendisk(head->disk);
-	}
 	blk_set_queue_dying(head->disk->queue);
 	/* make sure all pending bios are cleaned up */
 	kblockd_schedule_work(&head->requeue_work);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bb4a2003c097..2503f8fd54da 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
 int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
 		void *log, size_t size, u64 offset);
 
-extern const struct attribute_group nvme_ns_id_attr_group;
+extern const struct attribute_group *nvme_ns_id_attr_groups[];
 extern const struct block_device_operations nvme_ns_head_ops;
 
 #ifdef CONFIG_NVME_MULTIPATH
@@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
 void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
 int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
 void nvme_nvm_unregister(struct nvme_ns *ns);
-int nvme_nvm_register_sysfs(struct nvme_ns *ns);
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
+extern const struct attribute_group nvme_nvm_attr_group;
 int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
 #else
 static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
@@ -563,11 +562,6 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
 }
 
 static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
-static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
-{
-	return 0;
-}
-static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
 static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
 							unsigned long arg)
 {
-- 
2.16.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-17 20:04             ` Bart Van Assche
@ 2018-08-20  6:34               ` Hannes Reinecke
  -1 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-08-20  6:34 UTC (permalink / raw)
  To: Bart Van Assche, hch
  Cc: linux-kernel, linux-block, sagi, axboe, hare, linux-nvme, keith.busch

On 08/17/2018 10:04 PM, Bart Van Assche wrote:
> On Fri, 2018-08-17 at 09:00 +0200, hch@lst.de wrote:
>> On Tue, Aug 14, 2018 at 03:44:57PM +0000, Bart Van Assche wrote:
>>> On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
>>>> While I have considered having nvme_nvm_register_sysfs() returning a
>>>> pointer I would then have to remove the 'static' declaration from the
>>>> nvm_dev_attr_group_12/20.
>>>> Which I didn't really like, either.
>>>
>>> Hmm ... I don't see why the static declaration would have to be removed from
>>> nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
>>> Am I perhaps missing something?
>>
>> No, I think that would be the preferable approach IFF patching the global
>> table of groups would be viable.  I don't think it is, though - we can
>> have both normal NVMe and LightNVM devices in the same system, so we
>> can't just patch it over.
>>
>> So we'll need three different attribute group arrays:
>>
>> const struct attribute_group *nvme_ns_id_attr_groups[] = {
>> 	&nvme_ns_id_attr_group,
>> 	NULL,
>> };
>>
>> const struct attribute_group *lightnvm12_ns_id_attr_groups[] = {
>> 	&nvme_ns_id_attr_group,
>> 	&nvm_dev_attr_group_12,
>> 	NULL,
>> };
>>
>> const struct attribute_group *lightnvm20_ns_id_attr_groups[] = {
>> 	&nvme_ns_id_attr_group,
>> 	&nvm_dev_attr_group_20,
>> 	NULL,
>> };
>>
>> and a function to select which one to use.
> 
> Hello Christoph,
> 
> How about applying the patch below on top of Hannes' patch? The patch below
> has the advantage that it completely separates the open channel sysfs
> attributes from the NVMe core sysfs attributes - the open channel code
> doesn't have to know anything about the NVMe core sysfs attributes and the
> NVMe core does not have to know anything about the open channel sysfs
> attributes.
> 
Yes, this looks like the best approach.
I'll fold it into my patchset.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-17 20:04             ` Bart Van Assche
@ 2018-08-17 22:47               ` Sagi Grimberg
  -1 siblings, 0 replies; 61+ messages in thread
From: Sagi Grimberg @ 2018-08-17 22:47 UTC (permalink / raw)
  To: Bart Van Assche, hch
  Cc: linux-kernel, linux-block, hare, axboe, hare, linux-nvme, keith.busch


> Hello Christoph,
> 
> How about applying the patch below on top of Hannes' patch? The patch below
> has the advantage that it completely separates the open channel sysfs
> attributes from the NVMe core sysfs attributes - the open channel code
> doesn't have to know anything about the NVMe core sysfs attributes and the
> NVMe core does not have to know anything about the open channel sysfs
> attributes.

This looks better to me Bart (unless I'm missing something)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-17 20:04             ` Bart Van Assche
  0 siblings, 0 replies; 61+ messages in thread
From: Bart Van Assche @ 2018-08-17 20:04 UTC (permalink / raw)
  To: hch
  Cc: linux-kernel, linux-block, sagi, hare, axboe, hare, linux-nvme,
	keith.busch

On Fri, 2018-08-17 at 09:00 +0200, hch@lst.de wrote:
> On Tue, Aug 14, 2018 at 03:44:57PM +0000, Bart Van Assche wrote:
> > On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
> > > While I have considered having nvme_nvm_register_sysfs() returning a
> > > pointer I would then have to remove the 'static' declaration from the
> > > nvm_dev_attr_group_12/20.
> > > Which I didn't really like, either.
> > 
> > Hmm ... I don't see why the static declaration would have to be removed from
> > nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
> > Am I perhaps missing something?
> 
> No, I think that would be the preferable approach IFF patching the global
> table of groups would be viable.  I don't think it is, though - we can
> have both normal NVMe and LightNVM devices in the same system, so we
> can't just patch it over.
> 
> So we'll need three different attribute group arrays:
> 
> const struct attribute_group *nvme_ns_id_attr_groups[] = {
> 	&nvme_ns_id_attr_group,
> 	NULL,
> };
> 
> const struct attribute_group *lightnvm12_ns_id_attr_groups[] = {
> 	&nvme_ns_id_attr_group,
> 	&nvm_dev_attr_group_12,
> 	NULL,
> };
> 
> const struct attribute_group *lightnvm20_ns_id_attr_groups[] = {
> 	&nvme_ns_id_attr_group,
> 	&nvm_dev_attr_group_20,
> 	NULL,
> };
> 
> and a function to select which one to use.

Hello Christoph,

How about applying the patch below on top of Hannes' patch? The patch below
has the advantage that it completely separates the open channel sysfs
attributes from the NVMe core sysfs attributes - the open channel code
doesn't have to know anything about the NVMe core sysfs attributes and the
NVMe core does not have to know anything about the open channel sysfs
attributes.
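
The selection happens through the group's .is_visible callback, which
the driver core invokes once per attribute while registering the group:
returning 0 hides the attribute, returning attr->mode exposes it. As a
generic sketch (attr_applies_to_device() is a made-up helper, not part
of the patch):

static umode_t example_attrs_visible(struct kobject *kobj,
				     struct attribute *attr, int index)
{
	/* hide attributes that do not apply to this device */
	if (!attr_applies_to_device(kobj, attr))
		return 0;
	return attr->mode;
}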

Thanks,

Bart.


diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 8e26d98e9a8f..e0a9e1c5b30e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2736,7 +2736,9 @@ const struct attribute_group nvme_ns_id_attr_group = {
 
 const struct attribute_group *nvme_ns_id_attr_groups[] = {
 	&nvme_ns_id_attr_group,
-	NULL, /* Will be filled in by lightnvm if present */
+#ifdef CONFIG_NVM
+	&nvme_nvm_attr_group,
+#endif
 	NULL,
 };
 
@@ -3105,8 +3107,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 
 	nvme_get_ctrl(ctrl);
 
-	if (ns->ndev)
-		nvme_nvm_register_sysfs(ns);
 	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
 
 	nvme_mpath_add_disk(ns, id);
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 7bf2f9da6293..b7b19d3561a4 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -1190,10 +1190,29 @@ static NVM_DEV_ATTR_12_RO(multiplane_modes);
 static NVM_DEV_ATTR_12_RO(media_capabilities);
 static NVM_DEV_ATTR_12_RO(max_phys_secs);
 
-static struct attribute *nvm_dev_attrs_12[] = {
+/* 2.0 values */
+static NVM_DEV_ATTR_20_RO(groups);
+static NVM_DEV_ATTR_20_RO(punits);
+static NVM_DEV_ATTR_20_RO(chunks);
+static NVM_DEV_ATTR_20_RO(clba);
+static NVM_DEV_ATTR_20_RO(ws_min);
+static NVM_DEV_ATTR_20_RO(ws_opt);
+static NVM_DEV_ATTR_20_RO(maxoc);
+static NVM_DEV_ATTR_20_RO(maxocpu);
+static NVM_DEV_ATTR_20_RO(mw_cunits);
+static NVM_DEV_ATTR_20_RO(write_typ);
+static NVM_DEV_ATTR_20_RO(write_max);
+static NVM_DEV_ATTR_20_RO(reset_typ);
+static NVM_DEV_ATTR_20_RO(reset_max);
+
+static struct attribute *nvm_dev_attrs[] = {
+	/* version agnostic attrs */
 	&dev_attr_version.attr,
 	&dev_attr_capabilities.attr,
+	&dev_attr_read_typ.attr,
+	&dev_attr_read_max.attr,
 
+	/* 1.2 attrs */
 	&dev_attr_vendor_opcode.attr,
 	&dev_attr_device_mode.attr,
 	&dev_attr_media_manager.attr,
@@ -1208,8 +1227,6 @@ static struct attribute *nvm_dev_attrs_12[] = {
 	&dev_attr_page_size.attr,
 	&dev_attr_hw_sector_size.attr,
 	&dev_attr_oob_sector_size.attr,
-	&dev_attr_read_typ.attr,
-	&dev_attr_read_max.attr,
 	&dev_attr_prog_typ.attr,
 	&dev_attr_prog_max.attr,
 	&dev_attr_erase_typ.attr,
@@ -1218,33 +1235,7 @@ static struct attribute *nvm_dev_attrs_12[] = {
 	&dev_attr_media_capabilities.attr,
 	&dev_attr_max_phys_secs.attr,
 
-	NULL,
-};
-
-static const struct attribute_group nvm_dev_attr_group_12 = {
-	.name		= "lightnvm",
-	.attrs		= nvm_dev_attrs_12,
-};
-
-/* 2.0 values */
-static NVM_DEV_ATTR_20_RO(groups);
-static NVM_DEV_ATTR_20_RO(punits);
-static NVM_DEV_ATTR_20_RO(chunks);
-static NVM_DEV_ATTR_20_RO(clba);
-static NVM_DEV_ATTR_20_RO(ws_min);
-static NVM_DEV_ATTR_20_RO(ws_opt);
-static NVM_DEV_ATTR_20_RO(maxoc);
-static NVM_DEV_ATTR_20_RO(maxocpu);
-static NVM_DEV_ATTR_20_RO(mw_cunits);
-static NVM_DEV_ATTR_20_RO(write_typ);
-static NVM_DEV_ATTR_20_RO(write_max);
-static NVM_DEV_ATTR_20_RO(reset_typ);
-static NVM_DEV_ATTR_20_RO(reset_max);
-
-static struct attribute *nvm_dev_attrs_20[] = {
-	&dev_attr_version.attr,
-	&dev_attr_capabilities.attr,
-
+	/* 2.0 attrs */
 	&dev_attr_groups.attr,
 	&dev_attr_punits.attr,
 	&dev_attr_chunks.attr,
@@ -1255,8 +1246,6 @@ static struct attribute *nvm_dev_attrs_20[] = {
 	&dev_attr_maxocpu.attr,
 	&dev_attr_mw_cunits.attr,
 
-	&dev_attr_read_typ.attr,
-	&dev_attr_read_max.attr,
 	&dev_attr_write_typ.attr,
 	&dev_attr_write_max.attr,
 	&dev_attr_reset_typ.attr,
@@ -1265,25 +1254,35 @@ static struct attribute *nvm_dev_attrs_20[] = {
 	NULL,
 };
 
-static const struct attribute_group nvm_dev_attr_group_20 = {
-	.name		= "lightnvm",
-	.attrs		= nvm_dev_attrs_20,
-};
-
-void nvme_nvm_register_sysfs(struct nvme_ns *ns)
+static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
+				     struct attribute *attr, int index)
 {
+	struct device *dev = container_of(kobj, struct device, kobj);
+	struct gendisk *disk = dev_to_disk(dev);
+	struct nvme_ns *ns = disk->private_data;
 	struct nvm_dev *ndev = ns->ndev;
-	struct nvm_geo *geo = &ndev->geo;
+	struct device_attribute *dev_attr =
+		container_of(attr, typeof(*dev_attr), attr);
 
-	if (!ndev)
-		return;
+	if (dev_attr->show == nvm_dev_attr_show)
+		return attr->mode;
 
-	switch (geo->major_ver_id) {
+	switch (ndev ? ndev->geo.major_ver_id : 0) {
 	case 1:
-		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
+		if (dev_attr->show == nvm_dev_attr_show_12)
+			return attr->mode;
 		break;
 	case 2:
-		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
+		if (dev_attr->show == nvm_dev_attr_show_20)
+			return attr->mode;
 		break;
 	}
+
+	return 0;
 }
+
+const struct attribute_group nvm_dev_attr_group = {
+	.name		= "lightnvm",
+	.attrs		= nvm_dev_attrs,
+	.is_visible	= nvm_dev_attrs_visible,
+};
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9ba6d67d8e0a..2503f8fd54da 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -551,7 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
 void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
 int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
 void nvme_nvm_unregister(struct nvme_ns *ns);
-void nvme_nvm_register_sysfs(struct nvme_ns *ns);
+extern const struct attribute_group nvme_nvm_attr_group;
 int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
 #else
 static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
@@ -562,7 +562,6 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
 }
 
 static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
-static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
 static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
 							unsigned long arg)
 {


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-17  7:00           ` hch
@ 2018-08-17  7:53             ` Hannes Reinecke
  -1 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-08-17  7:53 UTC (permalink / raw)
  To: hch, Bart Van Assche
  Cc: axboe, linux-kernel, keith.busch, linux-nvme, linux-block, hare, sagi

On 08/17/2018 09:00 AM, hch@lst.de wrote:
> On Tue, Aug 14, 2018 at 03:44:57PM +0000, Bart Van Assche wrote:
>> On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
>>> While I have considered having nvme_nvm_register_sysfs() returning a
>>> pointer I would then have to remove the 'static' declaration from the
>>> nvm_dev_attr_group_12/20.
>>> Which I didn't really like, either.
>>
>> Hmm ... I don't see why the static declaration would have to be removed from
>> nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
>> Am I perhaps missing something?
> 
> No, I think that would be the preferable approach IFF patching the global
> table of groups would be viable.  I don't think it is, though - we can
> have both normal NVMe and LightNVM devices in the same system, so we
> can't just patch it over.
> 
> So we'll need three different attribute group arrays:
> 
> const struct attribute_group *nvme_ns_id_attr_groups[] = {
> 	&nvme_ns_id_attr_group,
> 	NULL,
> };
> 
> const struct attribute_group *lightnvm12_ns_id_attr_groups[] = {
> 	&nvme_ns_id_attr_group,
> 	&nvm_dev_attr_group_12,
> 	NULL,
> };
> 
> const struct attribute_group *lightnvm20_ns_id_attr_groups[] = {
> 	&nvme_ns_id_attr_group,
> 	&nvm_dev_attr_group_20,
> 	NULL,
> };
> 
> and a function to select which one to use.
> 
Yeah, I figured the same thing.
I'll be redoing the patchset.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-14 15:44         ` Bart Van Assche
@ 2018-08-17  7:00           ` hch
  -1 siblings, 0 replies; 61+ messages in thread
From: hch @ 2018-08-17  7:00 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: hare, axboe, hch, linux-kernel, keith.busch, linux-nvme,
	linux-block, hare, sagi

On Tue, Aug 14, 2018 at 03:44:57PM +0000, Bart Van Assche wrote:
> On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
> > While I have considered having nvme_nvm_register_sysfs() returning a
> > pointer I would then have to remove the 'static' declaration from the
> > nvm_dev_attr_group_12/20.
> > Which I didn't really like, either.
> 
> Hmm ... I don't see why the static declaration would have to be removed from
> nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
> Am I perhaps missing something?

No, I think that would be the preferable approach IFF patching the global
table of groups would be viable.  I don't think it is, though - we can
have both normal NVMe and LightNVM devices in the same system, so we
can't just patch it over.

So we'll need three different attribute group arrays:

const struct attribute_group *nvme_ns_id_attr_groups[] = {
	&nvme_ns_id_attr_group,
	NULL,
};

const struct attribute_group *lightnvm12_ns_id_attr_groups[] = {
	&nvme_ns_id_attr_group,
	&nvm_dev_attr_group_12,
	NULL,
};

const struct attribute_group *lightnvm20_ns_id_attr_groups[] = {
	&nvme_ns_id_attr_group,
	&nvm_dev_attr_group_20,
	NULL,
};

and a function to select which one to use.
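
A minimal sketch of such a selector, reusing the ns->ndev check and the
geo lookup from the existing sysfs code (the helper name is made up):

static const struct attribute_group **
nvme_ns_select_attr_groups(struct nvme_ns *ns)
{
	if (!ns->ndev)
		return nvme_ns_id_attr_groups;

	switch (ns->ndev->geo.major_ver_id) {
	case 1:
		return lightnvm12_ns_id_attr_groups;
	case 2:
		return lightnvm20_ns_id_attr_groups;
	default:
		return nvme_ns_id_attr_groups;
	}
}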

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-14  7:33   ` Hannes Reinecke
@ 2018-08-17  6:55     ` Christoph Hellwig
  -1 siblings, 0 replies; 61+ messages in thread
From: Christoph Hellwig @ 2018-08-17  6:55 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Jens Axboe, Christoph Hellwig, Sagi Grimberg, Keith Busch,
	Bart van Assche, linux-nvme, linux-block,
	Linux Kernel Mailinglist, Hannes Reinecke

On Tue, Aug 14, 2018 at 09:33:02AM +0200, Hannes Reinecke wrote:
> We should be registering the ns_id attributes as default sysfs
> attribute groups; otherwise we have a race condition between
> the uevent being sent and the attributes appearing in sysfs.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

I only reviewed the original patch without the lightnvm additions.
Please drop reviewed-by tags if you make non-trivial changes.

In this case I also wonder if you should have just kept the original
patch as-is and added a new one for lightnvm.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-14  7:33   ` Hannes Reinecke
@ 2018-08-14 21:53     ` kbuild test robot
  -1 siblings, 0 replies; 61+ messages in thread
From: kbuild test robot @ 2018-08-14 21:53 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: kbuild-all, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Keith Busch, Bart van Assche, linux-nvme, linux-block,
	Linux Kernel Mailinglist, Hannes Reinecke, Hannes Reinecke

Hi Hannes,

I love your patch! Perhaps something to improve:

[auto build test WARNING on block/for-next]
[also build test WARNING on next-20180814]
[cannot apply to v4.18]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Hannes-Reinecke/genhd-register-default-groups-with-device_add_disk/20180815-034942
base:   https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
reproduce:
        # apt-get install sparse
        make ARCH=x86_64 allmodconfig
        make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

   include/uapi/linux/perf_event.h:147:56: sparse: cast truncates bits from constant value (8000000000000000 becomes 0)
   drivers/nvme/host/core.c:506:28: sparse: expression using sizeof(void)
   include/linux/blkdev.h:1269:16: sparse: expression using sizeof(void)
   include/linux/slab.h:631:13: sparse: undefined identifier '__builtin_mul_overflow'
   drivers/nvme/host/core.c:1062:32: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1062:32: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1063:26: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1063:26: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1844:32: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1844:32: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1846:43: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2406:17: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2406:17: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
>> drivers/nvme/host/core.c:2732:30: sparse: symbol 'nvme_ns_id_attr_group' was not declared. Should it be static?
   drivers/nvme/host/core.c:3203:33: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:3566:17: sparse: expression using sizeof(void)
   include/linux/slab.h:631:13: sparse: call with no type!
   drivers/nvme/host/core.c:1264:23: sparse: context imbalance in 'nvme_get_ns_from_disk' - wrong count at exit
   drivers/nvme/host/core.c:1282:33: sparse: context imbalance in 'nvme_put_ns_from_disk' - unexpected unlock

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-14 21:53     ` kbuild test robot
  0 siblings, 0 replies; 61+ messages in thread
From: kbuild test robot @ 2018-08-14 21:53 UTC (permalink / raw)


Hi Hannes,

I love your patch! Perhaps something to improve:

[auto build test WARNING on block/for-next]
[also build test WARNING on next-20180814]
[cannot apply to v4.18]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Hannes-Reinecke/genhd-register-default-groups-with-device_add_disk/20180815-034942
base:   https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
reproduce:
        # apt-get install sparse
        make ARCH=x86_64 allmodconfig
        make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

   include/uapi/linux/perf_event.h:147:56: sparse: cast truncates bits from constant value (8000000000000000 becomes 0)
   drivers/nvme/host/core.c:506:28: sparse: expression using sizeof(void)
   include/linux/blkdev.h:1269:16: sparse: expression using sizeof(void)
   include/linux/slab.h:631:13: sparse: undefined identifier '__builtin_mul_overflow'
   drivers/nvme/host/core.c:1062:32: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1062:32: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1063:26: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1063:26: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1844:32: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1844:32: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:1846:43: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2406:17: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2406:17: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
>> drivers/nvme/host/core.c:2732:30: sparse: symbol 'nvme_ns_id_attr_group' was not declared. Should it be static?
   drivers/nvme/host/core.c:3203:33: sparse: expression using sizeof(void)
   drivers/nvme/host/core.c:3566:17: sparse: expression using sizeof(void)
   include/linux/slab.h:631:13: sparse: call with no type!
   drivers/nvme/host/core.c:1264:23: sparse: context imbalance in 'nvme_get_ns_from_disk' - wrong count at exit
   drivers/nvme/host/core.c:1282:33: sparse: context imbalance in 'nvme_put_ns_from_disk' - unexpected unlock

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-14 15:39       ` Hannes Reinecke
  (?)
@ 2018-08-14 15:44         ` Bart Van Assche
  -1 siblings, 0 replies; 61+ messages in thread
From: Bart Van Assche @ 2018-08-14 15:44 UTC (permalink / raw)
  To: hare, axboe
  Cc: hch, linux-kernel, keith.busch, linux-nvme, linux-block, hare, sagi

On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
> While I have considered having nvme_nvm_register_sysfs() returning a
> pointer I would then have to remove the 'static' declaration from the
> nvm_dev_attr_group_12/20.
> Which I didn't really like, either.

Hmm ... I don't see why the static declaration would have to be removed from
nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
Am I perhaps missing something?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-14 15:44         ` Bart Van Assche
  0 siblings, 0 replies; 61+ messages in thread
From: Bart Van Assche @ 2018-08-14 15:44 UTC (permalink / raw)
  To: hare, axboe
  Cc: hch, linux-kernel, keith.busch, linux-nvme, linux-block, hare, sagi

On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
> While I have considered having nvme_nvm_register_sysfs() returning a
> pointer I would then have to remove the 'static' declaration from the
> nvm_dev_attr_group_12/20.
> Which I didn't really like, either.

Hmm ... I don't see why the static declaration would have to be removed from
nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
Am I perhaps missing something?

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-14 15:44         ` Bart Van Assche
  0 siblings, 0 replies; 61+ messages in thread
From: Bart Van Assche @ 2018-08-14 15:44 UTC (permalink / raw)


On Tue, 2018-08-14@17:39 +0200, Hannes Reinecke wrote:
> While I have considered having nvme_nvm_register_sysfs() returning a
> pointer I would then have to remove the 'static' declaration from the
> nvm_dev_attr_group_12/20.
> Which I didn't really like, either.

Hmm ... I don't see why the static declaration would have to be removed from
nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
Am I perhaps missing something?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-14 15:20     ` Bart Van Assche
@ 2018-08-14 15:39       ` Hannes Reinecke
  -1 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-08-14 15:39 UTC (permalink / raw)
  To: Bart Van Assche, axboe
  Cc: hch, linux-kernel, keith.busch, linux-nvme, linux-block, hare, sagi

On 08/14/2018 05:20 PM, Bart Van Assche wrote:
> On Tue, 2018-08-14 at 09:33 +0200, Hannes Reinecke wrote:
>> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
>> +	&nvme_ns_id_attr_group,
>> +	NULL, /* Will be filled in by lightnvm if present */
>> +	NULL,
>> +};
> [ ... ]
>> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
>> -{
>> -	struct nvm_dev *ndev = ns->ndev;
>> -	struct nvm_geo *geo = &ndev->geo;
>> +		return;
>>  
>>  	switch (geo->major_ver_id) {
>>  	case 1:
>> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
>> -					&nvm_dev_attr_group_12);
>> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
>>  		break;
>>  	case 2:
>> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
>> -					&nvm_dev_attr_group_20);
>> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
> 
> This patch introduces a really ugly dependency between the NVMe core code and
> the lightnvm code, namely that the lightnvm code has to know at which position
> in the nvme_ns_id_attr_groups it can fill in its attribute group pointer. Have
> you considered to make nvme_nvm_register_sysfs() return an attribute group
> pointer such that the nvme_ns_id_attr_groups can be changed from a global into
> a static array?
> 
Wouldn't help much, as the 'nvme_ns_id_attr_groups' needs to be a global
pointer anyway as it's referenced from drivers/nvme/host/core.c and
drivers/nvme/host/multipath.c.

While I have considered having nvme_nvm_register_sysfs() returning a
pointer I would then have to remove the 'static' declaration from the
nvm_dev_attr_group_12/20.
Which I didn't really like, either.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-14 15:39       ` Hannes Reinecke
  0 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-08-14 15:39 UTC (permalink / raw)


On 08/14/2018 05:20 PM, Bart Van Assche wrote:
> On Tue, 2018-08-14@09:33 +0200, Hannes Reinecke wrote:
>> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
>> +	&nvme_ns_id_attr_group,
>> +	NULL, /* Will be filled in by lightnvm if present */
>> +	NULL,
>> +};
> [ ... ]
>> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
>> -{
>> -	struct nvm_dev *ndev = ns->ndev;
>> -	struct nvm_geo *geo = &ndev->geo;
>> +		return;
>>  
>>  	switch (geo->major_ver_id) {
>>  	case 1:
>> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
>> -					&nvm_dev_attr_group_12);
>> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
>>  		break;
>>  	case 2:
>> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
>> -					&nvm_dev_attr_group_20);
>> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
> 
> This patch introduces a really ugly dependency between the NVMe core code and
> the lightnvm code, namely that the lightnvm code has to know at which position
> in the nvme_ns_id_attr_groups it can fill in its attribute group pointer. Have
> you considered to make nvme_nvm_register_sysfs() return an attribute group
> pointer such that the nvme_ns_id_attr_groups can be changed from a global into
> a static array?
> 
Wouldn't help much, as the 'nvme_ns_id_attr_groups' needs to be a global
pointer anyway as it's referenced from drivers/nvme/host/core.c and
drivers/nvme/host/multipath.c.

While I have considered having nvme_nvm_register_sysfs() returning a
pointer I would then have to remove the 'static' declaration from the
nvm_dev_attr_group_12/20.
Which I didn't really like, either.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare at suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-14  7:33   ` Hannes Reinecke
  (?)
@ 2018-08-14 15:20     ` Bart Van Assche
  -1 siblings, 0 replies; 61+ messages in thread
From: Bart Van Assche @ 2018-08-14 15:20 UTC (permalink / raw)
  To: hare, axboe
  Cc: hch, linux-kernel, keith.busch, linux-nvme, linux-block, hare, sagi

On Tue, 2018-08-14 at 09:33 +0200, Hannes Reinecke wrote:
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> +	&nvme_ns_id_attr_group,
> +	NULL, /* Will be filled in by lightnvm if present */
> +	NULL,
> +};
[ ... ]
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> -	struct nvm_dev *ndev = ns->ndev;
> -	struct nvm_geo *geo = &ndev->geo;
> +		return;
>  
>  	switch (geo->major_ver_id) {
>  	case 1:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
>  		break;
>  	case 2:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;

This patch introduces a really ugly dependency between the NVMe core code and
the lightnvm code, namely that the lightnvm code has to know at which position
in the nvme_ns_id_attr_groups it can fill in its attribute group pointer. Have
you considered to make nvme_nvm_register_sysfs() return an attribute group
pointer such that the nvme_ns_id_attr_groups can be changed from a global into
a static array?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-14 15:20     ` Bart Van Assche
  0 siblings, 0 replies; 61+ messages in thread
From: Bart Van Assche @ 2018-08-14 15:20 UTC (permalink / raw)
  To: hare, axboe
  Cc: hch, linux-kernel, keith.busch, linux-nvme, linux-block, hare, sagi

On Tue, 2018-08-14 at 09:33 +0200, Hannes Reinecke wrote:
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> +	&nvme_ns_id_attr_group,
> +	NULL, /* Will be filled in by lightnvm if present */
> +	NULL,
> +};
[ ... ]
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> -	struct nvm_dev *ndev = ns->ndev;
> -	struct nvm_geo *geo = &ndev->geo;
> +		return;
>  
>  	switch (geo->major_ver_id) {
>  	case 1:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
>  		break;
>  	case 2:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;

This patch introduces a really ugly dependency between the NVMe core code and
the lightnvm code, namely that the lightnvm code has to know at which position
in the nvme_ns_id_attr_groups it can fill in its attribute group pointer. Have
you considered to make nvme_nvm_register_sysfs() return an attribute group
pointer such that the nvme_ns_id_attr_groups can be changed from a global into
a static array?

Thanks,

Bart.
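
A sketch of the approach Bart suggests, assuming the groups stay static in
drivers/nvme/host/lightnvm.c (the function name nvme_nvm_ns_attr_group() is
hypothetical):

const struct attribute_group *nvme_nvm_ns_attr_group(struct nvme_ns *ns)
{
	struct nvm_geo *geo = &ns->ndev->geo;

	/*
	 * nvm_dev_attr_group_12/20 can keep their 'static' qualifier:
	 * only their addresses escape this file, never their names.
	 */
	switch (geo->major_ver_id) {
	case 1:
		return &nvm_dev_attr_group_12;
	case 2:
		return &nvm_dev_attr_group_20;
	}
	return NULL;
}

The core could then build the namespace's attribute group array from the
returned pointer instead of patching the global nvme_ns_id_attr_groups table.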


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-14 15:20     ` Bart Van Assche
  0 siblings, 0 replies; 61+ messages in thread
From: Bart Van Assche @ 2018-08-14 15:20 UTC (permalink / raw)


On Tue, 2018-08-14@09:33 +0200, Hannes Reinecke wrote:
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> +	&nvme_ns_id_attr_group,
> +	NULL, /* Will be filled in by lightnvm if present */
> +	NULL,
> +};
[ ... ]
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> -	struct nvm_dev *ndev = ns->ndev;
> -	struct nvm_geo *geo = &ndev->geo;
> +		return;
>  
>  	switch (geo->major_ver_id) {
>  	case 1:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
>  		break;
>  	case 2:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;

This patch introduces a really ugly dependency between the NVMe core code and
the lightnvm code, namely that the lightnvm code has to know at which position
in the nvme_ns_id_attr_groups it can fill in its attribute group pointer. Have
you considered to make nvme_nvm_register_sysfs() return an attribute group
pointer such that the nvme_ns_id_attr_groups can be changed from a global into
a static array?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-14  7:33   ` Hannes Reinecke
@ 2018-08-14  9:59     ` Matias Bjørling
  -1 siblings, 0 replies; 61+ messages in thread
From: Matias Bjørling @ 2018-08-14  9:59 UTC (permalink / raw)
  To: hare, axboe
  Cc: hch, sagi, keith.busch, bart.vanassche, linux-nvme, linux-block,
	linux-kernel, hare

On 08/14/2018 09:33 AM, Hannes Reinecke wrote:
> We should be registering the ns_id attribute as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/nvme/host/core.c      | 21 +++++++++------------
>   drivers/nvme/host/lightnvm.c  | 27 ++++-----------------------
>   drivers/nvme/host/multipath.c | 15 ++++-----------
>   drivers/nvme/host/nvme.h      | 11 +++--------
>   4 files changed, 20 insertions(+), 54 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 0e824e8c8fd7..8e26d98e9a8f 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2734,6 +2734,12 @@ const struct attribute_group nvme_ns_id_attr_group = {
>   	.is_visible	= nvme_ns_id_attrs_are_visible,
>   };
>   
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> +	&nvme_ns_id_attr_group,
> +	NULL, /* Will be filled in by lightnvm if present */
> +	NULL,
> +};
> +
>   #define nvme_show_str_function(field)						\
>   static ssize_t  field##_show(struct device *dev,				\
>   			    struct device_attribute *attr, char *buf)		\
> @@ -3099,14 +3105,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
>   
>   	nvme_get_ctrl(ctrl);
>   
> -	device_add_disk(ctrl->device, ns->disk, NULL);
> -	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvme_ns_id_attr_group))
> -		pr_warn("%s: failed to create sysfs group for identification\n",
> -			ns->disk->disk_name);
> -	if (ns->ndev && nvme_nvm_register_sysfs(ns))
> -		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
> -			ns->disk->disk_name);
> +	if (ns->ndev)
> +		nvme_nvm_register_sysfs(ns);
> +	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
>   
>   	nvme_mpath_add_disk(ns, id);
>   	nvme_fault_inject_init(ns);
> @@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
>   
>   	nvme_fault_inject_fini(ns);
>   	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvme_ns_id_attr_group);
> -		if (ns->ndev)
> -			nvme_nvm_unregister_sysfs(ns);
>   		del_gendisk(ns->disk);
>   		blk_cleanup_queue(ns->queue);
>   		if (blk_get_integrity(ns->disk))
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 6fe5923c95d4..7bf2f9da6293 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -1270,39 +1270,20 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
>   	.attrs		= nvm_dev_attrs_20,
>   };
>   
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns)
>   {
>   	struct nvm_dev *ndev = ns->ndev;
>   	struct nvm_geo *geo = &ndev->geo;
>   
>   	if (!ndev)
> -		return -EINVAL;
> -
> -	switch (geo->major_ver_id) {
> -	case 1:
> -		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> -	case 2:
> -		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> -	}
> -
> -	return -EINVAL;
> -}
> -
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> -	struct nvm_dev *ndev = ns->ndev;
> -	struct nvm_geo *geo = &ndev->geo;
> +		return;
>   
>   	switch (geo->major_ver_id) {
>   	case 1:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
>   		break;
>   	case 2:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
>   		break;
>   	}
>   }
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 477af51d01e8..8e846095c42d 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
>   	if (!head->disk)
>   		return;
>   
> -	if (!(head->disk->flags & GENHD_FL_UP)) {
> -		device_add_disk(&head->subsys->dev, head->disk, NULL);
> -		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
> -				&nvme_ns_id_attr_group))
> -			dev_warn(&head->subsys->dev,
> -				 "failed to create id group.\n");
> -	}
> +	if (!(head->disk->flags & GENHD_FL_UP))
> +		device_add_disk(&head->subsys->dev, head->disk,
> +				nvme_ns_id_attr_groups);
>   
>   	kblockd_schedule_work(&ns->head->requeue_work);
>   }
> @@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
>   {
>   	if (!head->disk)
>   		return;
> -	if (head->disk->flags & GENHD_FL_UP) {
> -		sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
> -				   &nvme_ns_id_attr_group);
> +	if (head->disk->flags & GENHD_FL_UP)
>   		del_gendisk(head->disk);
> -	}
>   	blk_set_queue_dying(head->disk->queue);
>   	/* make sure all pending bios are cleaned up */
>   	kblockd_schedule_work(&head->requeue_work);
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index bb4a2003c097..9ba6d67d8e0a 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
>   int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
>   		void *log, size_t size, u64 offset);
>   
> -extern const struct attribute_group nvme_ns_id_attr_group;
> +extern const struct attribute_group *nvme_ns_id_attr_groups[];
>   extern const struct block_device_operations nvme_ns_head_ops;
>   
>   #ifdef CONFIG_NVME_MULTIPATH
> @@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
>   void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
>   int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
>   void nvme_nvm_unregister(struct nvme_ns *ns);
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns);
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns);
>   int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
>   #else
>   static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
> @@ -563,11 +562,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
>   }
>   
>   static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
> -static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> -{
> -	return 0;
> -}
> -static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
> +static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
>   static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
>   							unsigned long arg)
>   {
> 

Thanks Hannes.

Acked-by: Matias Bjørling <mb@lightnvm.io>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-14  9:59     ` Matias Bjørling
  0 siblings, 0 replies; 61+ messages in thread
From: Matias Bjørling @ 2018-08-14  9:59 UTC (permalink / raw)


On 08/14/2018 09:33 AM, Hannes Reinecke wrote:
> We should be registering the ns_id attribute as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.
> 
> Signed-off-by: Hannes Reinecke <hare at suse.com>
> Reviewed-by: Christoph Hellwig <hch at lst.de>
> ---
>   drivers/nvme/host/core.c      | 21 +++++++++------------
>   drivers/nvme/host/lightnvm.c  | 27 ++++-----------------------
>   drivers/nvme/host/multipath.c | 15 ++++-----------
>   drivers/nvme/host/nvme.h      | 11 +++--------
>   4 files changed, 20 insertions(+), 54 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 0e824e8c8fd7..8e26d98e9a8f 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2734,6 +2734,12 @@ const struct attribute_group nvme_ns_id_attr_group = {
>   	.is_visible	= nvme_ns_id_attrs_are_visible,
>   };
>   
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> +	&nvme_ns_id_attr_group,
> +	NULL, /* Will be filled in by lightnvm if present */
> +	NULL,
> +};
> +
>   #define nvme_show_str_function(field)						\
>   static ssize_t  field##_show(struct device *dev,				\
>   			    struct device_attribute *attr, char *buf)		\
> @@ -3099,14 +3105,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
>   
>   	nvme_get_ctrl(ctrl);
>   
> -	device_add_disk(ctrl->device, ns->disk, NULL);
> -	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvme_ns_id_attr_group))
> -		pr_warn("%s: failed to create sysfs group for identification\n",
> -			ns->disk->disk_name);
> -	if (ns->ndev && nvme_nvm_register_sysfs(ns))
> -		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
> -			ns->disk->disk_name);
> +	if (ns->ndev)
> +		nvme_nvm_register_sysfs(ns);
> +	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
>   
>   	nvme_mpath_add_disk(ns, id);
>   	nvme_fault_inject_init(ns);
> @@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
>   
>   	nvme_fault_inject_fini(ns);
>   	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvme_ns_id_attr_group);
> -		if (ns->ndev)
> -			nvme_nvm_unregister_sysfs(ns);
>   		del_gendisk(ns->disk);
>   		blk_cleanup_queue(ns->queue);
>   		if (blk_get_integrity(ns->disk))
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 6fe5923c95d4..7bf2f9da6293 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -1270,39 +1270,20 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
>   	.attrs		= nvm_dev_attrs_20,
>   };
>   
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns)
>   {
>   	struct nvm_dev *ndev = ns->ndev;
>   	struct nvm_geo *geo = &ndev->geo;
>   
>   	if (!ndev)
> -		return -EINVAL;
> -
> -	switch (geo->major_ver_id) {
> -	case 1:
> -		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> -	case 2:
> -		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> -	}
> -
> -	return -EINVAL;
> -}
> -
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> -	struct nvm_dev *ndev = ns->ndev;
> -	struct nvm_geo *geo = &ndev->geo;
> +		return;
>   
>   	switch (geo->major_ver_id) {
>   	case 1:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
>   		break;
>   	case 2:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
>   		break;
>   	}
>   }
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 477af51d01e8..8e846095c42d 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
>   	if (!head->disk)
>   		return;
>   
> -	if (!(head->disk->flags & GENHD_FL_UP)) {
> -		device_add_disk(&head->subsys->dev, head->disk, NULL);
> -		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
> -				&nvme_ns_id_attr_group))
> -			dev_warn(&head->subsys->dev,
> -				 "failed to create id group.\n");
> -	}
> +	if (!(head->disk->flags & GENHD_FL_UP))
> +		device_add_disk(&head->subsys->dev, head->disk,
> +				nvme_ns_id_attr_groups);
>   
>   	kblockd_schedule_work(&ns->head->requeue_work);
>   }
> @@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
>   {
>   	if (!head->disk)
>   		return;
> -	if (head->disk->flags & GENHD_FL_UP) {
> -		sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
> -				   &nvme_ns_id_attr_group);
> +	if (head->disk->flags & GENHD_FL_UP)
>   		del_gendisk(head->disk);
> -	}
>   	blk_set_queue_dying(head->disk->queue);
>   	/* make sure all pending bios are cleaned up */
>   	kblockd_schedule_work(&head->requeue_work);
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index bb4a2003c097..9ba6d67d8e0a 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
>   int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
>   		void *log, size_t size, u64 offset);
>   
> -extern const struct attribute_group nvme_ns_id_attr_group;
> +extern const struct attribute_group *nvme_ns_id_attr_groups[];
>   extern const struct block_device_operations nvme_ns_head_ops;
>   
>   #ifdef CONFIG_NVME_MULTIPATH
> @@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
>   void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
>   int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
>   void nvme_nvm_unregister(struct nvme_ns *ns);
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns);
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns);
>   int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
>   #else
>   static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
> @@ -563,11 +562,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
>   }
>   
>   static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
> -static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> -{
> -	return 0;
> -}
> -static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
> +static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
>   static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
>   							unsigned long arg)
>   {
> 

Thanks Hannes.

Acked-by: Matias Bjørling <mb at lightnvm.io>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-14  7:33   ` Hannes Reinecke
@ 2018-08-14  9:03     ` Javier González
  -1 siblings, 0 replies; 61+ messages in thread
From: Javier González @ 2018-08-14  9:03 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Jens Axboe, linux-block, Hannes Reinecke, Sagi Grimberg,
	Linux Kernel Mailinglist, linux-nvme, Keith Busch,
	Bart van Assche, Christoph Hellwig

[-- Attachment #1: Type: text/plain, Size: 6945 bytes --]

> 
> On 14 Aug 2018, at 09.33, Hannes Reinecke <hare@suse.de> wrote:
> 
> We should be registering the ns_id attribute as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
> drivers/nvme/host/core.c      | 21 +++++++++------------
> drivers/nvme/host/lightnvm.c  | 27 ++++-----------------------
> drivers/nvme/host/multipath.c | 15 ++++-----------
> drivers/nvme/host/nvme.h      | 11 +++--------
> 4 files changed, 20 insertions(+), 54 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 0e824e8c8fd7..8e26d98e9a8f 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2734,6 +2734,12 @@ const struct attribute_group nvme_ns_id_attr_group = {
> 	.is_visible	= nvme_ns_id_attrs_are_visible,
> };
> 
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> +	&nvme_ns_id_attr_group,
> +	NULL, /* Will be filled in by lightnvm if present */
> +	NULL,
> +};
> +
> #define nvme_show_str_function(field)						\
> static ssize_t  field##_show(struct device *dev,				\
> 			    struct device_attribute *attr, char *buf)		\
> @@ -3099,14 +3105,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
> 
> 	nvme_get_ctrl(ctrl);
> 
> -	device_add_disk(ctrl->device, ns->disk, NULL);
> -	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvme_ns_id_attr_group))
> -		pr_warn("%s: failed to create sysfs group for identification\n",
> -			ns->disk->disk_name);
> -	if (ns->ndev && nvme_nvm_register_sysfs(ns))
> -		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
> -			ns->disk->disk_name);
> +	if (ns->ndev)
> +		nvme_nvm_register_sysfs(ns);
> +	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
> 
> 	nvme_mpath_add_disk(ns, id);
> 	nvme_fault_inject_init(ns);
> @@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
> 
> 	nvme_fault_inject_fini(ns);
> 	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvme_ns_id_attr_group);
> -		if (ns->ndev)
> -			nvme_nvm_unregister_sysfs(ns);
> 		del_gendisk(ns->disk);
> 		blk_cleanup_queue(ns->queue);
> 		if (blk_get_integrity(ns->disk))
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 6fe5923c95d4..7bf2f9da6293 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -1270,39 +1270,20 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
> 	.attrs		= nvm_dev_attrs_20,
> };
> 
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns)
> {
> 	struct nvm_dev *ndev = ns->ndev;
> 	struct nvm_geo *geo = &ndev->geo;
> 
> 	if (!ndev)
> -		return -EINVAL;
> -
> -	switch (geo->major_ver_id) {
> -	case 1:
> -		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> -	case 2:
> -		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> -	}
> -
> -	return -EINVAL;
> -}
> -
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> -	struct nvm_dev *ndev = ns->ndev;
> -	struct nvm_geo *geo = &ndev->geo;
> +		return;
> 
> 	switch (geo->major_ver_id) {
> 	case 1:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
> 		break;
> 	case 2:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
> 		break;
> 	}
> }
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 477af51d01e8..8e846095c42d 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
> 	if (!head->disk)
> 		return;
> 
> -	if (!(head->disk->flags & GENHD_FL_UP)) {
> -		device_add_disk(&head->subsys->dev, head->disk, NULL);
> -		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
> -				&nvme_ns_id_attr_group))
> -			dev_warn(&head->subsys->dev,
> -				 "failed to create id group.\n");
> -	}
> +	if (!(head->disk->flags & GENHD_FL_UP))
> +		device_add_disk(&head->subsys->dev, head->disk,
> +				nvme_ns_id_attr_groups);
> 
> 	kblockd_schedule_work(&ns->head->requeue_work);
> }
> @@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
> {
> 	if (!head->disk)
> 		return;
> -	if (head->disk->flags & GENHD_FL_UP) {
> -		sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
> -				   &nvme_ns_id_attr_group);
> +	if (head->disk->flags & GENHD_FL_UP)
> 		del_gendisk(head->disk);
> -	}
> 	blk_set_queue_dying(head->disk->queue);
> 	/* make sure all pending bios are cleaned up */
> 	kblockd_schedule_work(&head->requeue_work);
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index bb4a2003c097..9ba6d67d8e0a 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
> int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
> 		void *log, size_t size, u64 offset);
> 
> -extern const struct attribute_group nvme_ns_id_attr_group;
> +extern const struct attribute_group *nvme_ns_id_attr_groups[];
> extern const struct block_device_operations nvme_ns_head_ops;
> 
> #ifdef CONFIG_NVME_MULTIPATH
> @@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
> void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
> int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
> void nvme_nvm_unregister(struct nvme_ns *ns);
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns);
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns);
> int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
> #else
> static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
> @@ -563,11 +562,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
> }
> 
> static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
> -static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> -{
> -	return 0;
> -}
> -static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
> +static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
> static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
> 							unsigned long arg)
> {
> --
> 2.12.3
> 
> 

The lightnvm part you added to V2 looks good. Thanks!


Reviewed-by: Javier González <javier@cnexlabs.com>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-14  9:03     ` Javier González
  0 siblings, 0 replies; 61+ messages in thread
From: Javier González @ 2018-08-14  9:03 UTC (permalink / raw)


> 
> On 14 Aug 2018,@09.33, Hannes Reinecke <hare@suse.de> wrote:
> 
> We should be registering the ns_id attribute as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.
> 
> Signed-off-by: Hannes Reinecke <hare at suse.com>
> Reviewed-by: Christoph Hellwig <hch at lst.de>
> ---
> drivers/nvme/host/core.c      | 21 +++++++++------------
> drivers/nvme/host/lightnvm.c  | 27 ++++-----------------------
> drivers/nvme/host/multipath.c | 15 ++++-----------
> drivers/nvme/host/nvme.h      | 11 +++--------
> 4 files changed, 20 insertions(+), 54 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 0e824e8c8fd7..8e26d98e9a8f 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2734,6 +2734,12 @@ const struct attribute_group nvme_ns_id_attr_group = {
> 	.is_visible	= nvme_ns_id_attrs_are_visible,
> };
> 
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> +	&nvme_ns_id_attr_group,
> +	NULL, /* Will be filled in by lightnvm if present */
> +	NULL,
> +};
> +
> #define nvme_show_str_function(field)						\
> static ssize_t  field##_show(struct device *dev,				\
> 			    struct device_attribute *attr, char *buf)		\
> @@ -3099,14 +3105,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
> 
> 	nvme_get_ctrl(ctrl);
> 
> -	device_add_disk(ctrl->device, ns->disk, NULL);
> -	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvme_ns_id_attr_group))
> -		pr_warn("%s: failed to create sysfs group for identification\n",
> -			ns->disk->disk_name);
> -	if (ns->ndev && nvme_nvm_register_sysfs(ns))
> -		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
> -			ns->disk->disk_name);
> +	if (ns->ndev)
> +		nvme_nvm_register_sysfs(ns);
> +	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
> 
> 	nvme_mpath_add_disk(ns, id);
> 	nvme_fault_inject_init(ns);
> @@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
> 
> 	nvme_fault_inject_fini(ns);
> 	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvme_ns_id_attr_group);
> -		if (ns->ndev)
> -			nvme_nvm_unregister_sysfs(ns);
> 		del_gendisk(ns->disk);
> 		blk_cleanup_queue(ns->queue);
> 		if (blk_get_integrity(ns->disk))
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 6fe5923c95d4..7bf2f9da6293 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -1270,39 +1270,20 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
> 	.attrs		= nvm_dev_attrs_20,
> };
> 
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns)
> {
> 	struct nvm_dev *ndev = ns->ndev;
> 	struct nvm_geo *geo = &ndev->geo;
> 
> 	if (!ndev)
> -		return -EINVAL;
> -
> -	switch (geo->major_ver_id) {
> -	case 1:
> -		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> -	case 2:
> -		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> -	}
> -
> -	return -EINVAL;
> -}
> -
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> -	struct nvm_dev *ndev = ns->ndev;
> -	struct nvm_geo *geo = &ndev->geo;
> +		return;
> 
> 	switch (geo->major_ver_id) {
> 	case 1:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_12);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
> 		break;
> 	case 2:
> -		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> -					&nvm_dev_attr_group_20);
> +		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
> 		break;
> 	}
> }
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 477af51d01e8..8e846095c42d 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
> 	if (!head->disk)
> 		return;
> 
> -	if (!(head->disk->flags & GENHD_FL_UP)) {
> -		device_add_disk(&head->subsys->dev, head->disk, NULL);
> -		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
> -				&nvme_ns_id_attr_group))
> -			dev_warn(&head->subsys->dev,
> -				 "failed to create id group.\n");
> -	}
> +	if (!(head->disk->flags & GENHD_FL_UP))
> +		device_add_disk(&head->subsys->dev, head->disk,
> +				nvme_ns_id_attr_groups);
> 
> 	kblockd_schedule_work(&ns->head->requeue_work);
> }
> @@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
> {
> 	if (!head->disk)
> 		return;
> -	if (head->disk->flags & GENHD_FL_UP) {
> -		sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
> -				   &nvme_ns_id_attr_group);
> +	if (head->disk->flags & GENHD_FL_UP)
> 		del_gendisk(head->disk);
> -	}
> 	blk_set_queue_dying(head->disk->queue);
> 	/* make sure all pending bios are cleaned up */
> 	kblockd_schedule_work(&head->requeue_work);
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index bb4a2003c097..9ba6d67d8e0a 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
> int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
> 		void *log, size_t size, u64 offset);
> 
> -extern const struct attribute_group nvme_ns_id_attr_group;
> +extern const struct attribute_group *nvme_ns_id_attr_groups[];
> extern const struct block_device_operations nvme_ns_head_ops;
> 
> #ifdef CONFIG_NVME_MULTIPATH
> @@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
> void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
> int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
> void nvme_nvm_unregister(struct nvme_ns *ns);
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns);
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns);
> int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
> #else
> static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
> @@ -563,11 +562,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
> }
> 
> static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
> -static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> -{
> -	return 0;
> -}
> -static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
> +static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
> static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
> 							unsigned long arg)
> {
> --
> 2.12.3
> 
> 

The lightnvm part you added to V2 looks good. Thanks!


Reviewed-by: Javier González <javier at cnexlabs.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
  2018-08-14  7:33 [PATCHv2 0/5] genhd: register default groups with device_add_disk() Hannes Reinecke
@ 2018-08-14  7:33   ` Hannes Reinecke
  0 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-08-14  7:33 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, Bart van Assche,
	linux-nvme, linux-block, Linux Kernel Mailinglist,
	Hannes Reinecke, Hannes Reinecke

We should be registering the ns_id attribute as default sysfs
attribute groups, otherwise we have a race condition between
the uevent and the attributes appearing in sysfs.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/core.c      | 21 +++++++++------------
 drivers/nvme/host/lightnvm.c  | 27 ++++-----------------------
 drivers/nvme/host/multipath.c | 15 ++++-----------
 drivers/nvme/host/nvme.h      | 11 +++--------
 4 files changed, 20 insertions(+), 54 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0e824e8c8fd7..8e26d98e9a8f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2734,6 +2734,12 @@ const struct attribute_group nvme_ns_id_attr_group = {
 	.is_visible	= nvme_ns_id_attrs_are_visible,
 };
 
+const struct attribute_group *nvme_ns_id_attr_groups[] = {
+	&nvme_ns_id_attr_group,
+	NULL, /* Will be filled in by lightnvm if present */
+	NULL,
+};
+
 #define nvme_show_str_function(field)						\
 static ssize_t  field##_show(struct device *dev,				\
 			    struct device_attribute *attr, char *buf)		\
@@ -3099,14 +3105,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 
 	nvme_get_ctrl(ctrl);
 
-	device_add_disk(ctrl->device, ns->disk, NULL);
-	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group))
-		pr_warn("%s: failed to create sysfs group for identification\n",
-			ns->disk->disk_name);
-	if (ns->ndev && nvme_nvm_register_sysfs(ns))
-		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
-			ns->disk->disk_name);
+	if (ns->ndev)
+		nvme_nvm_register_sysfs(ns);
+	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
 
 	nvme_mpath_add_disk(ns, id);
 	nvme_fault_inject_init(ns);
@@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 
 	nvme_fault_inject_fini(ns);
 	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group);
-		if (ns->ndev)
-			nvme_nvm_unregister_sysfs(ns);
 		del_gendisk(ns->disk);
 		blk_cleanup_queue(ns->queue);
 		if (blk_get_integrity(ns->disk))
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 6fe5923c95d4..7bf2f9da6293 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -1270,39 +1270,20 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
 	.attrs		= nvm_dev_attrs_20,
 };
 
-int nvme_nvm_register_sysfs(struct nvme_ns *ns)
+void nvme_nvm_register_sysfs(struct nvme_ns *ns)
 {
 	struct nvm_dev *ndev = ns->ndev;
 	struct nvm_geo *geo = &ndev->geo;
 
 	if (!ndev)
-		return -EINVAL;
-
-	switch (geo->major_ver_id) {
-	case 1:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
-	case 2:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
-	}
-
-	return -EINVAL;
-}
-
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
-{
-	struct nvm_dev *ndev = ns->ndev;
-	struct nvm_geo *geo = &ndev->geo;
+		return;
 
 	switch (geo->major_ver_id) {
 	case 1:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
+		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
 		break;
 	case 2:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
+		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
 		break;
 	}
 }
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 477af51d01e8..8e846095c42d 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
 	if (!head->disk)
 		return;
 
-	if (!(head->disk->flags & GENHD_FL_UP)) {
-		device_add_disk(&head->subsys->dev, head->disk, NULL);
-		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
-				&nvme_ns_id_attr_group))
-			dev_warn(&head->subsys->dev,
-				 "failed to create id group.\n");
-	}
+	if (!(head->disk->flags & GENHD_FL_UP))
+		device_add_disk(&head->subsys->dev, head->disk,
+				nvme_ns_id_attr_groups);
 
 	kblockd_schedule_work(&ns->head->requeue_work);
 }
@@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
 {
 	if (!head->disk)
 		return;
-	if (head->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
-				   &nvme_ns_id_attr_group);
+	if (head->disk->flags & GENHD_FL_UP)
 		del_gendisk(head->disk);
-	}
 	blk_set_queue_dying(head->disk->queue);
 	/* make sure all pending bios are cleaned up */
 	kblockd_schedule_work(&head->requeue_work);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bb4a2003c097..9ba6d67d8e0a 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
 int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
 		void *log, size_t size, u64 offset);
 
-extern const struct attribute_group nvme_ns_id_attr_group;
+extern const struct attribute_group *nvme_ns_id_attr_groups[];
 extern const struct block_device_operations nvme_ns_head_ops;
 
 #ifdef CONFIG_NVME_MULTIPATH
@@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
 void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
 int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
 void nvme_nvm_unregister(struct nvme_ns *ns);
-int nvme_nvm_register_sysfs(struct nvme_ns *ns);
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
+void nvme_nvm_register_sysfs(struct nvme_ns *ns);
 int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
 #else
 static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
@@ -563,11 +562,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
 }
 
 static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
-static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
-{
-	return 0;
-}
-static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
+static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
 static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
 							unsigned long arg)
 {
-- 
2.12.3

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups
@ 2018-08-14  7:33   ` Hannes Reinecke
  0 siblings, 0 replies; 61+ messages in thread
From: Hannes Reinecke @ 2018-08-14  7:33 UTC (permalink / raw)


We should be registering the ns_id attribute as default sysfs
attribute groups, otherwise we have a race condition between
the uevent and the attributes appearing in sysfs.

Signed-off-by: Hannes Reinecke <hare at suse.com>
Reviewed-by: Christoph Hellwig <hch at lst.de>
---
 drivers/nvme/host/core.c      | 21 +++++++++------------
 drivers/nvme/host/lightnvm.c  | 27 ++++-----------------------
 drivers/nvme/host/multipath.c | 15 ++++-----------
 drivers/nvme/host/nvme.h      | 11 +++--------
 4 files changed, 20 insertions(+), 54 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0e824e8c8fd7..8e26d98e9a8f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2734,6 +2734,12 @@ const struct attribute_group nvme_ns_id_attr_group = {
 	.is_visible	= nvme_ns_id_attrs_are_visible,
 };
 
+const struct attribute_group *nvme_ns_id_attr_groups[] = {
+	&nvme_ns_id_attr_group,
+	NULL, /* Will be filled in by lightnvm if present */
+	NULL,
+};
+
 #define nvme_show_str_function(field)						\
 static ssize_t  field##_show(struct device *dev,				\
 			    struct device_attribute *attr, char *buf)		\
@@ -3099,14 +3105,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 
 	nvme_get_ctrl(ctrl);
 
-	device_add_disk(ctrl->device, ns->disk, NULL);
-	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group))
-		pr_warn("%s: failed to create sysfs group for identification\n",
-			ns->disk->disk_name);
-	if (ns->ndev && nvme_nvm_register_sysfs(ns))
-		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
-			ns->disk->disk_name);
+	if (ns->ndev)
+		nvme_nvm_register_sysfs(ns);
+	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
 
 	nvme_mpath_add_disk(ns, id);
 	nvme_fault_inject_init(ns);
@@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 
 	nvme_fault_inject_fini(ns);
 	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvme_ns_id_attr_group);
-		if (ns->ndev)
-			nvme_nvm_unregister_sysfs(ns);
 		del_gendisk(ns->disk);
 		blk_cleanup_queue(ns->queue);
 		if (blk_get_integrity(ns->disk))
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 6fe5923c95d4..7bf2f9da6293 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -1270,39 +1270,20 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
 	.attrs		= nvm_dev_attrs_20,
 };
 
-int nvme_nvm_register_sysfs(struct nvme_ns *ns)
+void nvme_nvm_register_sysfs(struct nvme_ns *ns)
 {
 	struct nvm_dev *ndev = ns->ndev;
 	struct nvm_geo *geo = &ndev->geo;
 
 	if (!ndev)
-		return -EINVAL;
-
-	switch (geo->major_ver_id) {
-	case 1:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
-	case 2:
-		return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
-	}
-
-	return -EINVAL;
-}
-
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
-{
-	struct nvm_dev *ndev = ns->ndev;
-	struct nvm_geo *geo = &ndev->geo;
+		return;
 
 	switch (geo->major_ver_id) {
 	case 1:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_12);
+		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
 		break;
 	case 2:
-		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
-					&nvm_dev_attr_group_20);
+		nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
 		break;
 	}
 }
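
For illustration only (this note is editorial, not part of the patch):
nvme_nvm_register_sysfs() no longer creates sysfs groups itself; it
merely fills the reserved slot in the shared nvme_ns_id_attr_groups[]
array so that the subsequent device_add_disk() call in nvme_alloc_ns()
registers the lightnvm group together with the generic one. For a 2.0
device the array then effectively reads:

        const struct attribute_group *nvme_ns_id_attr_groups[] = {
                &nvme_ns_id_attr_group,         /* generic ns_id attributes */
                &nvm_dev_attr_group_20,         /* filled in at runtime */
                NULL,
        };

Since the array is a single global shared by every namespace, this
relies on all lightnvm namespaces reporting the same major version;
mixed 1.2/2.0 configurations would overwrite the slot as namespaces
come up.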
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 477af51d01e8..8e846095c42d 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
 	if (!head->disk)
 		return;
 
-	if (!(head->disk->flags & GENHD_FL_UP)) {
-		device_add_disk(&head->subsys->dev, head->disk, NULL);
-		if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
-				&nvme_ns_id_attr_group))
-			dev_warn(&head->subsys->dev,
-				 "failed to create id group.\n");
-	}
+	if (!(head->disk->flags & GENHD_FL_UP))
+		device_add_disk(&head->subsys->dev, head->disk,
+				nvme_ns_id_attr_groups);
 
 	kblockd_schedule_work(&ns->head->requeue_work);
 }
@@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
 {
 	if (!head->disk)
 		return;
-	if (head->disk->flags & GENHD_FL_UP) {
-		sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
-				   &nvme_ns_id_attr_group);
+	if (head->disk->flags & GENHD_FL_UP)
 		del_gendisk(head->disk);
-	}
 	blk_set_queue_dying(head->disk->queue);
 	/* make sure all pending bios are cleaned up */
 	kblockd_schedule_work(&head->requeue_work);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bb4a2003c097..9ba6d67d8e0a 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
 int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
 		void *log, size_t size, u64 offset);
 
-extern const struct attribute_group nvme_ns_id_attr_group;
+extern const struct attribute_group *nvme_ns_id_attr_groups[];
 extern const struct block_device_operations nvme_ns_head_ops;
 
 #ifdef CONFIG_NVME_MULTIPATH
@@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
 void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
 int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
 void nvme_nvm_unregister(struct nvme_ns *ns);
-int nvme_nvm_register_sysfs(struct nvme_ns *ns);
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
+void nvme_nvm_register_sysfs(struct nvme_ns *ns);
 int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
 #else
 static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
@@ -563,11 +562,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
 }
 
 static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
-static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
-{
-	return 0;
-}
-static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
+static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
 static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
 							unsigned long arg)
 {
-- 
2.12.3

^ permalink raw reply related	[flat|nested] 61+ messages in thread

end of thread, other threads:[~2018-09-28 15:17 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-05  7:00 [PATCHv3 0/5] genhd: register default groups with device_add_disk() Hannes Reinecke
2018-09-05  7:00 ` Hannes Reinecke
2018-09-05  7:00 ` [PATCH 1/5] block: genhd: add 'groups' argument to device_add_disk Hannes Reinecke
2018-09-05  7:00   ` Hannes Reinecke
2018-09-05 15:24   ` Bart Van Assche
2018-09-05 15:24     ` Bart Van Assche
2018-09-05  7:00 ` [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups Hannes Reinecke
2018-09-05  7:00   ` Hannes Reinecke
2018-09-05 13:18   ` Christoph Hellwig
2018-09-05 13:18     ` Christoph Hellwig
2018-09-05 13:32     ` Hannes Reinecke
2018-09-05 13:32       ` Hannes Reinecke
2018-09-05 13:45       ` Christoph Hellwig
2018-09-05 13:45         ` Christoph Hellwig
2018-09-06  9:56         ` Hannes Reinecke
2018-09-06  9:56           ` Hannes Reinecke
2018-09-05  7:00 ` [PATCH 3/5] aoe: use device_add_disk_with_groups() Hannes Reinecke
2018-09-05  7:00   ` Hannes Reinecke
2018-09-05  7:00 ` [PATCH 4/5] zram: register default groups with device_add_disk() Hannes Reinecke
2018-09-05  7:00   ` Hannes Reinecke
2018-09-05  7:00 ` [PATCH 5/5] virtio-blk: modernize sysfs attribute creation Hannes Reinecke
2018-09-05  7:00   ` Hannes Reinecke
2018-09-21  5:48 ` [PATCHv3 0/5] genhd: register default groups with device_add_disk() Christoph Hellwig
2018-09-21  5:48   ` Christoph Hellwig
2018-09-27 19:02   ` Bart Van Assche
2018-09-27 19:02     ` Bart Van Assche
  -- strict thread matches above, loose matches on Subject: below --
2018-09-28  6:17 [PATCHv4 " Hannes Reinecke
2018-09-28  6:17 ` [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups Hannes Reinecke
2018-09-28  6:17   ` Hannes Reinecke
2018-09-28 14:15   ` Keith Busch
2018-09-28 14:15     ` Keith Busch
2018-09-28 15:17   ` Christoph Hellwig
2018-09-28 15:17     ` Christoph Hellwig
2018-08-14  7:33 [PATCHv2 0/5] genhd: register default groups with device_add_disk() Hannes Reinecke
2018-08-14  7:33 ` [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups Hannes Reinecke
2018-08-14  7:33   ` Hannes Reinecke
2018-08-14  9:03   ` Javier González
2018-08-14  9:03     ` Javier González
2018-08-14  9:59   ` Matias Bjørling
2018-08-14  9:59     ` Matias Bjørling
2018-08-14 15:20   ` Bart Van Assche
2018-08-14 15:20     ` Bart Van Assche
2018-08-14 15:20     ` Bart Van Assche
2018-08-14 15:39     ` Hannes Reinecke
2018-08-14 15:39       ` Hannes Reinecke
2018-08-14 15:44       ` Bart Van Assche
2018-08-14 15:44         ` Bart Van Assche
2018-08-14 15:44         ` Bart Van Assche
2018-08-17  7:00         ` hch
2018-08-17  7:00           ` hch
2018-08-17  7:53           ` Hannes Reinecke
2018-08-17  7:53             ` Hannes Reinecke
2018-08-17 20:04           ` Bart Van Assche
2018-08-17 20:04             ` Bart Van Assche
2018-08-17 20:04             ` Bart Van Assche
2018-08-17 22:47             ` Sagi Grimberg
2018-08-17 22:47               ` Sagi Grimberg
2018-08-20  6:34             ` Hannes Reinecke
2018-08-20  6:34               ` Hannes Reinecke
2018-08-14 21:53   ` kbuild test robot
2018-08-14 21:53     ` kbuild test robot
2018-08-17  6:55   ` Christoph Hellwig
2018-08-17  6:55     ` Christoph Hellwig
