* [PATCH v7 0/5] Support for Open-Channel SSDs
From: Matias Bjørling @ 2015-08-07 14:29 UTC
  To: hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

These patches implement support for Open-Channel SSDs.

Applies against axboe's linux-block/for-4.3/drivers and can be found
in the lkml_v7 branch at https://github.com/OpenChannelSSD/linux

Any feedback is greatly appreciated.

Changes since v6:
 - Multipage support (Javier Gonzalez)
 - General code cleanups
 - Fixed memleak on register failure

Changes since v5:
Feedback from Christoph Hellwig:
 - Created a new null_nvm driver from null_blk that registers itself as
   a lightnvm device.
 - Changed the register/unregister interface to take only disk_name.
   The gendisk allocation in nvme is kept; most instantiations will
   involve the device gendisk, so refactoring is deferred to a later
   time. (A sketch of the resulting driver-side interface follows this
   list.)
 - Renamed global parameters in core.c and rrpc.c
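
A minimal sketch of the driver-facing half of that interface (the
mydrv_* names are illustrative, not part of this series; null_nvm in
patch 4/5 is the real reference):

  #include <linux/module.h>
  #include <linux/blkdev.h>
  #include <linux/slab.h>
  #include <linux/lightnvm.h>

  static int mydrv_identify(struct request_queue *q, struct nvm_id *id)
  {
  	id->ver_id = 1;
  	id->nvm_type = NVM_NVMT_BLK;
  	id->nchannels = 1;
  	/* freed by the core through nvm_core_free() */
  	id->chnls = kcalloc(1, sizeof(struct nvm_id_chnl), GFP_KERNEL);
  	return id->chnls ? 0 : -ENOMEM;
  }

  static int mydrv_get_features(struct request_queue *q,
  				struct nvm_get_features *gf)
  {
  	gf->rsp = 0;	/* device keeps no responsibilities */
  	gf->ext = 0;
  	return 0;
  }

  static int mydrv_submit_io(struct request_queue *q, struct nvm_rq *rqd)
  {
  	return -EIO;	/* stub: no media behind it */
  }

  static struct nvm_dev_ops mydrv_nvm_ops = {
  	.identify	= mydrv_identify,
  	.get_features	= mydrv_get_features,
  	.submit_io	= mydrv_submit_io,
  	.max_phys_sect	= 1,	/* <= 1: core skips the ppa list pool */
  };

  /* at probe time, once the request_queue and disk name exist:
   *	err = nvm_register(q, disk_name, &mydrv_nvm_ops);
   * and on teardown:
   *	nvm_unregister(disk_name);
   */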

Changes since v4:
 - Remove gendisk->nvm dependency
 - Remove device driver rq private field dependency.
 - Update submission and completion. The flow is now
     Target -> Block Manager -> Device Driver, replacing callbacks in
     the device driver.
 - Abstracted the block manager out into its own module. Other block
   managers can now be implemented, for example to support fully
   host-based SSDs.
 - No longer exposes the device driver gendisk to user-space.
 - Management is moved into /sys/module/lnvm/parameters/configure_debug
   (usage example below).
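
   A hypothetical session against that parameter (device and target
   names are illustrative; the line formats follow the
   nvm_configure_add/del/show parsers in core.c):

     # create target "tgt0" on device "nvme0n1" with the rrpc engine,
     # spanning luns 0 through 3
     echo "a nvme0n1 tgt0 rrpc 0:3" > /sys/module/lnvm/parameters/configure_debug

     # print free blocks for the device, then delete the target again
     echo "s nvme0n1" > /sys/module/lnvm/parameters/configure_debug
     echo "d tgt0" > /sys/module/lnvm/parameters/configure_debug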

Changes since v3:

 - Remove dependency on REQ_NVM_GC
 - Refactor nvme integration to use nvme_submit_sync_cmd for
   internal commands.
 - Fix race condition between multiple threads in the RRPC target.
 - Rename sysfs entry under the block device from nvm to lightnvm.
   The configuration is found in /sys/block/*/lightnvm/

Changes since v2:

 Feedback from Paul Bolle:
 - Fix license to GPLv2, documentation, compilation.
 Feedback from Keith Busch:
 - nvme: Move lightnvm out and into nvme-lightnvm.c.
 - nvme: Set controller css on lightnvm command set.
 - nvme: Remove OACS.
 Feedback from Christoph Hellwig:
 - lightnvm: Move out of block layer into /drivers/lightnvm/core.c
 - lightnvm: refactor request->phys_sector into device drivers.
 - lightnvm: refactor prep/unprep into device drivers.
 - lightnvm: move nvm_dev from request_queue to gendisk.

 New
 - Bad block table support (From Javier).
 - Update maintainers file.

Changes since v1:

 - Split LightNVM into two parts: a get/put interface for flash
   blocks and the respective targets that implement flash translation
   layer logic.
 - Updated the patches according to the LightNVM specification changes.
 - Added interface to add/remove targets for a block device.

Thanks to Jens Axboe, Christoph Hellwig, Keith Busch, Paul Bolle,
Javier Gonzalez and Jesper Madsen for discussions and contributions.

Matias Bjørling (5):
  lightnvm: Support for Open-Channel SSDs
  lightnvm: Hybrid Open-Channel SSD RRPC target
  lightnvm: Hybrid Open-Channel SSD block manager
  null_nvm: Lightnvm test driver
  nvme: LightNVM support

 MAINTAINERS                   |    8 +
 drivers/Kconfig               |    2 +
 drivers/Makefile              |    5 +
 drivers/block/Makefile        |    2 +-
 drivers/block/nvme-core.c     |   23 +-
 drivers/block/nvme-lightnvm.c |  568 ++++++++++++++++++
 drivers/lightnvm/Kconfig      |   36 ++
 drivers/lightnvm/Makefile     |    8 +
 drivers/lightnvm/bm_hb.c      |  366 ++++++++++++
 drivers/lightnvm/bm_hb.h      |   46 ++
 drivers/lightnvm/core.c       |  591 +++++++++++++++++++
 drivers/lightnvm/null_nvm.c   |  481 +++++++++++++++
 drivers/lightnvm/rrpc.c       | 1296 +++++++++++++++++++++++++++++++++++++++++
 drivers/lightnvm/rrpc.h       |  236 ++++++++
 include/linux/lightnvm.h      |  334 +++++++++++
 include/linux/nvme.h          |    6 +
 include/uapi/linux/nvme.h     |    3 +
 17 files changed, 4007 insertions(+), 4 deletions(-)
 create mode 100644 drivers/block/nvme-lightnvm.c
 create mode 100644 drivers/lightnvm/Kconfig
 create mode 100644 drivers/lightnvm/Makefile
 create mode 100644 drivers/lightnvm/bm_hb.c
 create mode 100644 drivers/lightnvm/bm_hb.h
 create mode 100644 drivers/lightnvm/core.c
 create mode 100644 drivers/lightnvm/null_nvm.c
 create mode 100644 drivers/lightnvm/rrpc.c
 create mode 100644 drivers/lightnvm/rrpc.h
 create mode 100644 include/linux/lightnvm.h

-- 
2.1.4


* [PATCH v7 1/5] lightnvm: Support for Open-Channel SSDs
From: Matias Bjørling @ 2015-08-07 14:29 UTC
  To: hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

Open-channel SSDs are devices that share responsibilities with the host
in order to implement and maintain features that typical SSDs keep
strictly in firmware. These include (i) the Flash Translation Layer
(FTL), (ii) bad block management, and (iii) hardware units such as the
flash controller, the interface controller, and a large number of flash
chips. In this way, Open-channel SSDs expose direct access to their
physical flash storage, while keeping a subset of the internal features
of SSDs.

LightNVM is a specification that gives support to Open-channel SSDs.
LightNVM allows the host to manage data placement, garbage collection,
and parallelism. Device specific responsibilities such as bad block
management, FTL extensions to support atomic IOs, or metadata
persistence are still handled by the device.

The implementation of LightNVM consists of two parts: core and
(multiple) targets. The core implements functionality shared across
targets, such as initialization, teardown, and statistics. The targets
implement the interface that exposes physical flash to user-space
applications. Examples of such targets include key-value stores and
object stores, as well as traditional block devices, which can be
application-specific.
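
A minimal sketch of a target module written against this interface (all
stub_* names are illustrative and error handling is elided; patch 2/5
adds rrpc, the real target). The bio_endio() call assumes the
two-argument signature; adjust to your tree's bio API if it differs:

  #include <linux/module.h>
  #include <linux/blkdev.h>
  #include <linux/bio.h>
  #include <linux/lightnvm.h>

  static void *stub_init(struct nvm_dev *dev, struct gendisk *tdisk,
  			int lun_begin, int lun_end)
  {
  	/* a real target allocates and returns its private state here */
  	return NULL;
  }

  static void stub_make_rq(struct request_queue *q, struct bio *bio)
  {
  	/*
  	 * A real target translates the bio to physical flash addresses
  	 * and hands it to nvm_submit_io(); a stub must still complete
  	 * the bio so submitters do not hang.
  	 */
  	bio_endio(bio, 0);
  }

  static sector_t stub_capacity(void *private)
  {
  	return 0;	/* no usable sectors in this stub */
  }

  static struct nvm_tgt_type tt_stub = {
  	.name		= "stub",
  	.version	= {1, 0, 0},
  	.make_rq	= stub_make_rq,
  	.capacity	= stub_capacity,
  	.init		= stub_init,
  };

  static int __init stub_module_init(void)
  {
  	return nvm_register_target(&tt_stub);
  }

  static void __exit stub_module_exit(void)
  {
  	nvm_unregister_target(&tt_stub);
  }

  module_init(stub_module_init);
  module_exit(stub_module_exit);
  MODULE_LICENSE("GPL");

Once loaded, such a target type can be instantiated on a registered
device through the configure_debug parameter described in the cover
letter.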

Contributions in this patch from:

  Javier Gonzalez <jg@lightnvm.io>
  Jesper Madsen <jmad@itu.dk>

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
 MAINTAINERS               |   8 +
 drivers/Kconfig           |   2 +
 drivers/Makefile          |   5 +
 drivers/lightnvm/Kconfig  |  16 ++
 drivers/lightnvm/Makefile |   5 +
 drivers/lightnvm/core.c   | 590 ++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/lightnvm.h  | 335 ++++++++++++++++++++++++++
 7 files changed, 961 insertions(+)
 create mode 100644 drivers/lightnvm/Kconfig
 create mode 100644 drivers/lightnvm/Makefile
 create mode 100644 drivers/lightnvm/core.c
 create mode 100644 include/linux/lightnvm.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 2d3d55c..d149104 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6162,6 +6162,14 @@ S:	Supported
 F:	drivers/nvdimm/pmem.c
 F:	include/linux/pmem.h
 
+LIGHTNVM PLATFORM SUPPORT
+M:	Matias Bjorling <mb@lightnvm.io>
+W:	http://github.com/OpenChannelSSD
+S:	Maintained
+F:	drivers/lightnvm/
+F:	include/linux/lightnvm.h
+F:	include/uapi/linux/lightnvm.h
+
 LINUX FOR IBM pSERIES (RS/6000)
 M:	Paul Mackerras <paulus@au.ibm.com>
 W:	http://www.ibm.com/linux/ltc/projects/ppc
diff --git a/drivers/Kconfig b/drivers/Kconfig
index 6e973b8..3992902 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -42,6 +42,8 @@ source "drivers/net/Kconfig"
 
 source "drivers/isdn/Kconfig"
 
+source "drivers/lightnvm/Kconfig"
+
 # input before char - char/joystick depends on it. As does USB.
 
 source "drivers/input/Kconfig"
diff --git a/drivers/Makefile b/drivers/Makefile
index b64b49f..75978ab 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -63,6 +63,10 @@ obj-$(CONFIG_FB_I810)           += video/fbdev/i810/
 obj-$(CONFIG_FB_INTEL)          += video/fbdev/intelfb/
 
 obj-$(CONFIG_PARPORT)		+= parport/
+
+# lightnvm/ comes before block to initialize bm before usage
+obj-$(CONFIG_NVM)		+= lightnvm/
+
 obj-y				+= base/ block/ misc/ mfd/ nfc/
 obj-$(CONFIG_LIBNVDIMM)		+= nvdimm/
 obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
@@ -165,3 +169,4 @@ obj-$(CONFIG_RAS)		+= ras/
 obj-$(CONFIG_THUNDERBOLT)	+= thunderbolt/
 obj-$(CONFIG_CORESIGHT)		+= hwtracing/coresight/
 obj-$(CONFIG_ANDROID)		+= android/
+
diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
new file mode 100644
index 0000000..1f8412c
--- /dev/null
+++ b/drivers/lightnvm/Kconfig
@@ -0,0 +1,16 @@
+#
+# Open-Channel SSD NVM configuration
+#
+
+menuconfig NVM
+	bool "Open-Channel SSD target support"
+	depends on BLOCK
+	help
+	  Say Y here to enable Open-channel SSDs.
+
+	  Open-Channel SSDs implement a set of extensions to SSDs that
+	  expose direct access to the underlying non-volatile memory.
+
+	  If you say N, all options in this submenu will be skipped and
+	  disabled; only do this if you know what you are doing.
+
diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
new file mode 100644
index 0000000..38185e9
--- /dev/null
+++ b/drivers/lightnvm/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for Open-Channel SSDs.
+#
+
+obj-$(CONFIG_NVM)		:= core.o
diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
new file mode 100644
index 0000000..6499922
--- /dev/null
+++ b/drivers/lightnvm/core.c
@@ -0,0 +1,590 @@
+/*
+ * Copyright (C) 2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mabj@itu.dk>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; see the file COPYING.  If not, write to
+ * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
+ * USA.
+ *
+ */
+
+#include <linux/blkdev.h>
+#include <linux/blk-mq.h>
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/sem.h>
+#include <linux/bitmap.h>
+#include <linux/module.h>
+
+#include <linux/lightnvm.h>
+
+static LIST_HEAD(nvm_targets);
+static LIST_HEAD(nvm_bms);
+static LIST_HEAD(nvm_devices);
+static DECLARE_RWSEM(nvm_lock);
+
+struct nvm_tgt_type *nvm_find_target_type(const char *name)
+{
+	struct nvm_tgt_type *tt;
+
+	list_for_each_entry(tt, &nvm_targets, list)
+		if (!strcmp(name, tt->name))
+			return tt;
+
+	return NULL;
+}
+
+int nvm_register_target(struct nvm_tgt_type *tt)
+{
+	int ret = 0;
+
+	down_write(&nvm_lock);
+	if (nvm_find_target_type(tt->name))
+		ret = -EEXIST;
+	else
+		list_add(&tt->list, &nvm_targets);
+	up_write(&nvm_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(nvm_register_target);
+
+void nvm_unregister_target(struct nvm_tgt_type *tt)
+{
+	if (!tt)
+		return;
+
+	down_write(&nvm_lock);
+	list_del(&tt->list);
+	up_write(&nvm_lock);
+}
+EXPORT_SYMBOL(nvm_unregister_target);
+
+void *nvm_alloc_ppalist(struct nvm_dev *dev, gfp_t mem_flags,
+							dma_addr_t *dma_handler)
+{
+	return dev->ops->alloc_ppalist(dev->q, dev->ppalist_pool, mem_flags,
+								dma_handler);
+}
+EXPORT_SYMBOL(nvm_alloc_ppalist);
+
+void nvm_free_ppalist(struct nvm_dev *dev, void *ppa_list,
+							dma_addr_t dma_handler)
+{
+	dev->ops->free_ppalist(dev->ppalist_pool, ppa_list, dma_handler);
+}
+EXPORT_SYMBOL(nvm_free_ppalist);
+
+struct nvm_bm_type *nvm_find_bm_type(const char *name)
+{
+	struct nvm_bm_type *bt;
+
+	list_for_each_entry(bt, &nvm_bms, list)
+		if (!strcmp(name, bt->name))
+			return bt;
+
+	return NULL;
+}
+
+int nvm_register_bm(struct nvm_bm_type *bt)
+{
+	int ret = 0;
+
+	down_write(&nvm_lock);
+	if (nvm_find_bm_type(bt->name))
+		ret = -EEXIST;
+	else
+		list_add(&bt->list, &nvm_bms);
+	up_write(&nvm_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(nvm_register_bm);
+
+void nvm_unregister_bm(struct nvm_bm_type *bt)
+{
+	if (!bt)
+		return;
+
+	down_write(&nvm_lock);
+	list_del(&bt->list);
+	up_write(&nvm_lock);
+}
+EXPORT_SYMBOL(nvm_unregister_bm);
+
+struct nvm_dev *nvm_find_nvm_dev(const char *name)
+{
+	struct nvm_dev *dev;
+
+	list_for_each_entry(dev, &nvm_devices, devices)
+		if (!strcmp(name, dev->name))
+			return dev;
+
+	return NULL;
+}
+
+struct nvm_block *nvm_get_blk(struct nvm_dev *dev, struct nvm_lun *lun,
+							unsigned long flags)
+{
+	return dev->bm->get_blk(dev, lun, flags);
+}
+EXPORT_SYMBOL(nvm_get_blk);
+
+/* Assumes that all valid pages have already been moved to the bm on release */
+void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	return dev->bm->put_blk(dev, blk);
+}
+EXPORT_SYMBOL(nvm_put_blk);
+
+int nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
+{
+	return dev->ops->submit_io(dev->q, rqd);
+}
+EXPORT_SYMBOL(nvm_submit_io);
+
+/* Send erase command to device */
+int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	return dev->bm->erase_blk(dev, blk);
+}
+EXPORT_SYMBOL(nvm_erase_blk);
+
+static void nvm_core_free(struct nvm_dev *dev)
+{
+	kfree(dev->identity.chnls);
+	kfree(dev);
+}
+
+static int nvm_core_init(struct nvm_dev *dev)
+{
+	dev->nr_luns = dev->identity.nchannels;
+	dev->sector_size = EXPOSED_PAGE_SIZE;
+	INIT_LIST_HEAD(&dev->online_targets);
+
+	return 0;
+}
+
+static void nvm_free(struct nvm_dev *dev)
+{
+	if (!dev)
+		return;
+
+	if (dev->bm)
+		dev->bm->unregister_bm(dev);
+
+	nvm_core_free(dev);
+}
+
+int nvm_validate_features(struct nvm_dev *dev)
+{
+	struct nvm_get_features gf;
+	int ret;
+
+	ret = dev->ops->get_features(dev->q, &gf);
+	if (ret)
+		return ret;
+
+	dev->features = gf;
+
+	return 0;
+}
+
+int nvm_validate_responsibility(struct nvm_dev *dev)
+{
+	if (!dev->ops->set_responsibility)
+		return 0;
+
+	return dev->ops->set_responsibility(dev->q, 0);
+}
+
+int nvm_init(struct nvm_dev *dev)
+{
+	struct nvm_bm_type *bt;
+	int ret = 0;
+
+	if (!dev->q || !dev->ops)
+		return -EINVAL;
+
+	if (dev->ops->identify(dev->q, &dev->identity)) {
+		pr_err("nvm: device could not be identified\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	pr_debug("nvm dev: ver %u type %u chnls %u\n",
+			dev->identity.ver_id,
+			dev->identity.nvm_type,
+			dev->identity.nchannels);
+
+	ret = nvm_validate_features(dev);
+	if (ret) {
+		pr_err("nvm: disk features are not supported.\n");
+		goto err;
+	}
+
+	ret = nvm_validate_responsibility(dev);
+	if (ret) {
+		pr_err("nvm: disk responsibilities are not supported.\n");
+		goto err;
+	}
+
+	ret = nvm_core_init(dev);
+	if (ret) {
+		pr_err("nvm: could not initialize core structures.\n");
+		goto err;
+	}
+
+	if (!dev->nr_luns) {
+		pr_err("nvm: device did not expose any luns.\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	/* register the device with a supported BM */
+	list_for_each_entry(bt, &nvm_bms, list) {
+		ret = bt->register_bm(dev);
+		if (ret < 0)
+			goto err; /* initialization failed */
+		if (ret > 0) {
+			dev->bm = bt;
+			break; /* successfully initialized */
+		}
+	}
+
+	if (!ret) {
+		pr_info("nvm: no compatible bm was found.\n");
+		return 0;
+	}
+
+	pr_info("nvm: registered %s with luns: %u blocks: %lu sector size: %d\n",
+		dev->name, dev->nr_luns, dev->total_blocks, dev->sector_size);
+
+	return 0;
+err:
+	kfree(dev->identity.chnls);	/* dev itself is freed by the caller */
+	pr_err("nvm: failed to initialize nvm\n");
+	return ret;
+}
+
+void nvm_exit(struct nvm_dev *dev)
+{
+	if (dev->ppalist_pool)
+		dev->ops->destroy_ppa_pool(dev->ppalist_pool);
+	nvm_free(dev);
+
+	pr_info("nvm: successfully unloaded\n");
+}
+
+static const struct block_device_operations nvm_fops = {
+	.owner		= THIS_MODULE,
+};
+
+static int nvm_create_target(struct nvm_dev *dev, char *ttname, char *tname,
+						int lun_begin, int lun_end)
+{
+	struct request_queue *tqueue;
+	struct gendisk *tdisk;
+	struct nvm_tgt_type *tt;
+	struct nvm_target *t;
+	void *targetdata;
+
+	tt = nvm_find_target_type(ttname);
+	if (!tt) {
+		pr_err("nvm: target type %s not found\n", ttname);
+		return -EINVAL;
+	}
+
+	down_write(&nvm_lock);
+	list_for_each_entry(t, &dev->online_targets, list) {
+		if (!strcmp(tname, t->disk->disk_name)) {
+			pr_err("nvm: target name already exists.\n");
+			up_write(&nvm_lock);
+			return -EINVAL;
+		}
+	}
+	up_write(&nvm_lock);
+
+	t = kmalloc(sizeof(struct nvm_target), GFP_KERNEL);
+	if (!t)
+		return -ENOMEM;
+
+	tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node);
+	if (!tqueue)
+		goto err_t;
+	blk_queue_make_request(tqueue, tt->make_rq);
+
+	tdisk = alloc_disk(0);
+	if (!tdisk)
+		goto err_queue;
+
+	sprintf(tdisk->disk_name, "%s", tname);
+	tdisk->flags = GENHD_FL_EXT_DEVT;
+	tdisk->major = 0;
+	tdisk->first_minor = 0;
+	tdisk->fops = &nvm_fops;
+	tdisk->queue = tqueue;
+
+	targetdata = tt->init(dev, tdisk, lun_begin, lun_end);
+	if (IS_ERR(targetdata))
+		goto err_init;
+
+	tdisk->private_data = targetdata;
+	tqueue->queuedata = targetdata;
+
+	blk_queue_max_hw_sectors(tqueue, 8 * dev->ops->max_phys_sect);
+
+	set_capacity(tdisk, tt->capacity(targetdata));
+	add_disk(tdisk);
+
+	t->type = tt;
+	t->disk = tdisk;
+
+	down_write(&nvm_lock);
+	list_add_tail(&t->list, &dev->online_targets);
+	up_write(&nvm_lock);
+
+	return 0;
+err_init:
+	put_disk(tdisk);
+err_queue:
+	blk_cleanup_queue(tqueue);
+err_t:
+	kfree(t);
+	return -ENOMEM;
+}
+
+static void nvm_remove_target(struct nvm_target *t)
+{
+	struct nvm_tgt_type *tt = t->type;
+	struct gendisk *tdisk = t->disk;
+	struct request_queue *q = tdisk->queue;
+
+	lockdep_assert_held(&nvm_lock);
+
+	del_gendisk(tdisk);
+	if (tt->exit)
+		tt->exit(tdisk->private_data);
+
+	blk_cleanup_queue(q);
+
+	put_disk(tdisk);
+
+	list_del(&t->list);
+	kfree(t);
+}
+
+static int nvm_configure_show(const char *val)
+{
+	struct nvm_dev *dev;
+	char opcode, devname[DISK_NAME_LEN];
+	int ret;
+
+	ret = sscanf(val, "%c %s", &opcode, devname);
+	if (ret != 2) {
+		pr_err("nvm: invalid command. Use \"opcode devicename\".\n");
+		return -EINVAL;
+	}
+
+	dev = nvm_find_nvm_dev(devname);
+	if (!dev) {
+		pr_err("nvm: device not found\n");
+		return -EINVAL;
+	}
+
+	if (!dev->bm)
+		return 0;
+
+	dev->bm->free_blocks_print(dev);
+
+	return 0;
+}
+
+static int nvm_configure_del(const char *val)
+{
+	struct nvm_target *t = NULL;
+	struct nvm_dev *dev;
+	char opcode, tname[255];
+	int ret;
+
+	ret = sscanf(val, "%c %s", &opcode, tname);
+	if (ret != 2) {
+		pr_err("nvm: invalid command. Use \"d targetname\".\n");
+		return -EINVAL;
+	}
+
+	down_write(&nvm_lock);
+	list_for_each_entry(dev, &nvm_devices, devices)
+		list_for_each_entry(t, &dev->online_targets, list) {
+			if (!strcmp(tname, t->disk->disk_name)) {
+				nvm_remove_target(t);
+				ret = 0;
+				break;
+			}
+		}
+	up_write(&nvm_lock);
+
+	if (ret) {
+		pr_err("nvm: target \"%s\" doesn't exist.\n", tname);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int nvm_configure_add(const char *val)
+{
+	struct nvm_dev *dev;
+	char opcode, devname[DISK_NAME_LEN], tgtengine[255], tname[255];
+	int lun_begin, lun_end, ret;
+
+	ret = sscanf(val, "%c %s %s %s %u:%u", &opcode, devname, tgtengine,
+						tname, &lun_begin, &lun_end);
+	if (ret != 6) {
+		pr_err("nvm: invalid command. Use \"opcode device name tgtengine lun_begin:lun_end\".\n");
+		return -EINVAL;
+	}
+
+	dev = nvm_find_nvm_dev(devname);
+	if (!dev) {
+		pr_err("nvm: device not found\n");
+		return -EINVAL;
+	}
+
+	if (lun_begin > lun_end || lun_end > dev->nr_luns) {
+		pr_err("nvm: lun out of bound (%u:%u > %u)\n",
+					lun_begin, lun_end, dev->nr_luns);
+		return -EINVAL;
+	}
+
+	return nvm_create_target(dev, tname, tgtengine, lun_begin, lun_end);
+}
+
+/*
+ * Exposes an administrative interface through
+ * /sys/module/lnvm/parameters/configure_debug
+ */
+static int nvm_configure_by_str_event(const char *val,
+					const struct kernel_param *kp)
+{
+	char opcode;
+	int ret;
+
+	ret = sscanf(val, "%c", &opcode);
+	if (ret != 1) {
+		pr_err("nvm: configure must be in the format of \"opcode ...\"\n");
+		return -EINVAL;
+	}
+
+	switch (opcode) {
+	case 'a':
+		return nvm_configure_add(val);
+	case 'd':
+		return nvm_configure_del(val);
+	case 's':
+		return nvm_configure_show(val);
+	default:
+		pr_err("nvm: invalid opcode.\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int nvm_configure_get(char *buf, const struct kernel_param *kp)
+{
+	char *buf_start = buf;
+	struct nvm_dev *dev;
+
+	buf += sprintf(buf, "available devices:\n");
+	down_write(&nvm_lock);
+	list_for_each_entry(dev, &nvm_devices, devices) {
+		/* stay within the PAGE_SIZE buffer given to param getters */
+		if (buf - buf_start > 4095 - DISK_NAME_LEN)
+			break;
+		buf += sprintf(buf, " %s\n", dev->name);
+	}
+	up_write(&nvm_lock);
+
+	return buf - buf_start - 1;
+}
+
+static const struct kernel_param_ops nvm_configure_by_str_event_param_ops = {
+	.set	= nvm_configure_by_str_event,
+	.get	= nvm_configure_get,
+};
+
+#undef MODULE_PARAM_PREFIX
+#define MODULE_PARAM_PREFIX	"lnvm."
+
+module_param_cb(configure_debug, &nvm_configure_by_str_event_param_ops, NULL,
+									0644);
+
+int nvm_register(struct request_queue *q, char *disk_name,
+							struct nvm_dev_ops *ops)
+{
+	struct nvm_dev *dev;
+	int ret;
+
+	if (!ops->identify || !ops->get_features)
+		return -EINVAL;
+
+	dev = kzalloc(sizeof(struct nvm_dev), GFP_KERNEL);
+	if (!dev)
+		return -ENOMEM;
+
+	dev->q = q;
+	dev->ops = ops;
+	strncpy(dev->name, disk_name, DISK_NAME_LEN);
+
+	ret = nvm_init(dev);
+	if (ret)
+		goto err_init;
+
+	down_write(&nvm_lock);
+	list_add(&dev->devices, &nvm_devices);
+	up_write(&nvm_lock);
+
+	if (dev->ops->max_phys_sect > 255) {
+		pr_info("nvm: maximum number of sectors supported in target is 255. max_phys_sect set to 255\n");
+		dev->ops->max_phys_sect = 255;
+	}
+
+	if (dev->ops->max_phys_sect > 1) {
+		dev->ppalist_pool = dev->ops->create_ppa_pool(dev->q);
+		if (!dev->ppalist_pool) {
+			pr_err("nvm: could not create ppa pool\n");
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+err_init:
+	kfree(dev);
+	return ret;
+}
+EXPORT_SYMBOL(nvm_register);
+
+void nvm_unregister(char *disk_name)
+{
+	struct nvm_dev *dev = nvm_find_nvm_dev(disk_name);
+
+	if (!dev) {
+		pr_err("nvm: could not find device %s on unregister\n",
+								disk_name);
+		return;
+	}
+
+	nvm_exit(dev);
+
+	down_write(&nvm_lock);
+	list_del(&dev->devices);
+	up_write(&nvm_lock);
+}
+EXPORT_SYMBOL(nvm_unregister);
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
new file mode 100644
index 0000000..9654354
--- /dev/null
+++ b/include/linux/lightnvm.h
@@ -0,0 +1,335 @@
+#ifndef NVM_H
+#define NVM_H
+
+enum {
+	NVM_IO_OK = 0,
+	NVM_IO_REQUEUE = 1,
+	NVM_IO_DONE = 2,
+	NVM_IO_ERR = 3,
+
+	NVM_IOTYPE_NONE = 0,
+	NVM_IOTYPE_GC = 1,
+};
+
+#ifdef CONFIG_NVM
+
+#include <linux/blkdev.h>
+#include <linux/types.h>
+#include <linux/file.h>
+#include <linux/dmapool.h>
+
+enum {
+	/* HW Responsibilities */
+	NVM_RSP_L2P	= 1 << 0,
+	NVM_RSP_GC	= 1 << 1,
+	NVM_RSP_ECC	= 1 << 2,
+
+	/* Physical NVM Type */
+	NVM_NVMT_BLK	= 0,
+	NVM_NVMT_BYTE	= 1,
+
+	/* Internal IO Scheduling algorithm */
+	NVM_IOSCHED_CHANNEL	= 0,
+	NVM_IOSCHED_CHIP	= 1,
+
+	/* Status codes */
+	NVM_SUCCESS		= 0,
+	NVM_RSP_NOT_CHANGEABLE	= 1,
+};
+
+struct nvm_id_chnl {
+	u64	laddr_begin;
+	u64	laddr_end;
+	u32	oob_size;
+	u32	queue_size;
+	u32	gran_read;
+	u32	gran_write;
+	u32	gran_erase;
+	u32	t_r;
+	u32	t_sqr;
+	u32	t_w;
+	u32	t_sqw;
+	u32	t_e;
+	u16	chnl_parallelism;
+	u8	io_sched;
+	u8	res[133];
+};
+
+struct nvm_id {
+	u8	ver_id;
+	u8	nvm_type;
+	u16	nchannels;
+	struct nvm_id_chnl *chnls;
+};
+
+struct nvm_get_features {
+	u64	rsp;
+	u64	ext;
+};
+
+struct nvm_target {
+	struct list_head list;
+	struct nvm_tgt_type *type;
+	struct gendisk *disk;
+};
+
+struct nvm_tgt_instance {
+	struct nvm_tgt_type *tt;
+};
+
+struct nvm_rq {
+	struct nvm_tgt_instance *ins;
+	struct bio *bio;
+	union {
+		sector_t ppa;
+		sector_t *ppa_list;
+	};
+	/* DMA handler to be used by underlying devices supporting DMA */
+	dma_addr_t dma_ppa_list;
+	uint8_t npages;
+};
+
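+/*
+ * A target's per-request private data (PDU) is laid out immediately
+ * after struct nvm_rq; these helpers convert between the two views.
+ */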
+static inline struct nvm_rq *nvm_rq_from_pdu(void *pdu)
+{
+	return pdu - sizeof(struct nvm_rq);
+}
+
+static inline void *nvm_rq_to_pdu(struct nvm_rq *rqdata)
+{
+	return rqdata + 1;
+}
+
+struct nvm_block;
+
+typedef int (nvm_l2p_update_fn)(u64, u64, u64 *, void *);
+typedef int (nvm_bb_update_fn)(u32, void *, unsigned int, void *);
+typedef int (nvm_id_fn)(struct request_queue *, struct nvm_id *);
+typedef int (nvm_get_features_fn)(struct request_queue *,
+						struct nvm_get_features *);
+typedef int (nvm_set_rsp_fn)(struct request_queue *, u64);
+typedef int (nvm_get_l2p_tbl_fn)(struct request_queue *, u64, u64,
+				nvm_l2p_update_fn *, void *);
+typedef int (nvm_op_bb_tbl_fn)(struct request_queue *, int, unsigned int,
+				nvm_bb_update_fn *, void *);
+typedef int (nvm_submit_io_fn)(struct request_queue *, struct nvm_rq *);
+typedef int (nvm_erase_blk_fn)(struct request_queue *, sector_t);
+typedef void *(nvm_create_ppapool_fn)(struct request_queue *);
+typedef void (nvm_destroy_ppapool_fn)(void *);
+typedef void *(nvm_alloc_ppalist_fn)(struct request_queue *, void *, gfp_t,
+								dma_addr_t*);
+typedef void (nvm_free_ppalist_fn)(void *, void*, dma_addr_t);
+
+struct nvm_dev_ops {
+	nvm_id_fn		*identify;
+	nvm_get_features_fn	*get_features;
+	nvm_set_rsp_fn		*set_responsibility;
+	nvm_get_l2p_tbl_fn	*get_l2p_tbl;
+	nvm_op_bb_tbl_fn	*set_bb_tbl;
+	nvm_op_bb_tbl_fn	*get_bb_tbl;
+
+	nvm_submit_io_fn	*submit_io;
+	nvm_erase_blk_fn	*erase_block;
+
+	nvm_create_ppapool_fn	*create_ppa_pool;
+	nvm_destroy_ppapool_fn	*destroy_ppa_pool;
+	nvm_alloc_ppalist_fn	*alloc_ppalist;
+	nvm_free_ppalist_fn	*free_ppalist;
+
+	uint8_t			max_phys_sect;
+};
+
+struct nvm_lun {
+	int id;
+
+	int nr_pages_per_blk;
+	unsigned int nr_blocks;		/* end_block - start_block. */
+	unsigned int nr_free_blocks;	/* Number of unused blocks */
+
+	struct nvm_block *blocks;
+
+	spinlock_t lock;
+};
+
+struct nvm_block {
+	struct list_head list;
+	struct nvm_lun *lun;
+	unsigned long long id;
+
+	void *priv;
+	int type;
+};
+
+struct nvm_dev {
+	struct nvm_dev_ops *ops;
+
+	struct list_head devices;
+	struct list_head online_targets;
+
+	/* Block manager */
+	struct nvm_bm_type *bm;
+	void *bmp;
+
+	/* Target information */
+	int nr_luns;
+
+	/* Calculated/Cached values. These do not reflect the actual usable
+	 * blocks at run-time. */
+	unsigned long total_pages;
+	unsigned long total_blocks;
+	unsigned max_pages_per_blk;
+
+	uint32_t sector_size;
+
+	void *ppalist_pool;
+
+	/* Identity */
+	struct nvm_id identity;
+	struct nvm_get_features features;
+
+	/* Backend device */
+	struct request_queue *q;
+	char name[DISK_NAME_LEN];
+};
+
+typedef void (nvm_tgt_make_rq_fn)(struct request_queue *, struct bio *);
+typedef sector_t (nvm_tgt_capacity_fn)(void *);
+typedef void (nvm_tgt_end_io_fn)(struct nvm_rq *, int);
+typedef void *(nvm_tgt_init_fn)(struct nvm_dev *, struct gendisk *, int, int);
+typedef void (nvm_tgt_exit_fn)(void *);
+
+struct nvm_tgt_type {
+	const char *name;
+	unsigned int version[3];
+
+	/* target entry points */
+	nvm_tgt_make_rq_fn *make_rq;
+	nvm_tgt_capacity_fn *capacity;
+	nvm_tgt_end_io_fn *end_io;
+
+	/* module-specific init/teardown */
+	nvm_tgt_init_fn *init;
+	nvm_tgt_exit_fn *exit;
+
+	/* For internal use */
+	struct list_head list;
+};
+
+extern int nvm_register_target(struct nvm_tgt_type *);
+extern void nvm_unregister_target(struct nvm_tgt_type *);
+
+extern void *nvm_alloc_ppalist(struct nvm_dev *, gfp_t, dma_addr_t *);
+extern void nvm_free_ppalist(struct nvm_dev *, void *, dma_addr_t);
+
+typedef int (nvm_bm_register_fn)(struct nvm_dev *);
+typedef void (nvm_bm_unregister_fn)(struct nvm_dev *);
+typedef struct nvm_block *(nvm_bm_get_blk_fn)(struct nvm_dev *,
+					      struct nvm_lun *, unsigned long);
+typedef void (nvm_bm_put_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef int (nvm_bm_open_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef int (nvm_bm_close_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef void (nvm_bm_flush_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef int (nvm_bm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
+typedef void (nvm_bm_end_io_fn)(struct nvm_rq *, int);
+typedef int (nvm_bm_erase_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef int (nvm_bm_register_prog_err_fn)(struct nvm_dev *,
+	     void (prog_err_fn)(struct nvm_dev *, struct nvm_block *));
+typedef int (nvm_bm_save_state_fn)(struct file *);
+typedef int (nvm_bm_restore_state_fn)(struct file *);
+typedef struct nvm_lun *(nvm_bm_get_luns_fn)(struct nvm_dev *, int, int);
+typedef void (nvm_bm_free_blocks_print_fn)(struct nvm_dev *);
+
+struct nvm_bm_type {
+	const char *name;
+	unsigned int version[3];
+
+	nvm_bm_register_fn *register_bm;
+	nvm_bm_unregister_fn *unregister_bm;
+
+	/* Block administration callbacks */
+	nvm_bm_get_blk_fn *get_blk;
+	nvm_bm_put_blk_fn *put_blk;
+	nvm_bm_open_blk_fn *open_blk;
+	nvm_bm_close_blk_fn *close_blk;
+	nvm_bm_flush_blk_fn *flush_blk;
+
+	nvm_bm_submit_io_fn *submit_io;
+	nvm_bm_end_io_fn *end_io;
+	nvm_bm_erase_blk_fn *erase_blk;
+
+	/* State management for debugging purposes */
+	nvm_bm_save_state_fn *save_state;
+	nvm_bm_restore_state_fn *restore_state;
+
+	/* Configuration management */
+	nvm_bm_get_luns_fn *get_luns;
+
+	/* Statistics */
+	nvm_bm_free_blocks_print_fn *free_blocks_print;
+	struct list_head list;
+};
+
+extern int nvm_register_bm(struct nvm_bm_type *);
+extern void nvm_unregister_bm(struct nvm_bm_type *);
+
+extern struct nvm_block *nvm_get_blk(struct nvm_dev *, struct nvm_lun *,
+								unsigned long);
+extern void nvm_put_blk(struct nvm_dev *, struct nvm_block *);
+extern int nvm_erase_blk(struct nvm_dev *, struct nvm_block *);
+
+extern int nvm_register(struct request_queue *, char *,
+						struct nvm_dev_ops *);
+extern void nvm_unregister(char *);
+
+extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
+
+/* We currently assume that the lightnvm device accepts data in 512 byte
+ * chunks. This should be set to the smallest command size available for a
+ * given device.
+ */
+#define NVM_SECTOR (512)
+#define EXPOSED_PAGE_SIZE (4096)
+
+#define NR_PHY_IN_LOG (EXPOSED_PAGE_SIZE / NVM_SECTOR)
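+/* i.e. NR_PHY_IN_LOG = 8: eight 512-byte sectors per exposed 4096-byte page */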
+
+#define NVM_MSG_PREFIX "nvm"
+#define ADDR_EMPTY (~0ULL)
+
+static inline unsigned long nvm_get_rq_flags(struct request *rq)
+{
+	return (unsigned long)rq->cmd;
+}
+
+#else /* CONFIG_NVM */
+
+struct nvm_dev_ops;
+struct nvm_dev;
+struct nvm_lun;
+struct nvm_block;
+struct nvm_rq {
+};
+struct nvm_tgt_type;
+struct nvm_tgt_instance;
+
+static inline struct nvm_tgt_type *nvm_find_target_type(const char *c)
+{
+	return NULL;
+}
+static inline int nvm_register(struct request_queue *q, char *disk_name,
+							struct nvm_dev_ops *ops)
+{
+	return -EINVAL;
+}
+static inline void nvm_unregister(char *disk_name) {}
+static inline struct nvm_block *nvm_get_blk(struct nvm_dev *dev,
+				struct nvm_lun *lun, unsigned long flags)
+{
+	return NULL;
+}
+static inline void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk) {}
+static inline int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	return -EINVAL;
+}
+
+#endif /* CONFIG_NVM */
+#endif /* NVM_H */
-- 
2.1.4


+
+	/* Configuration management */
+	nvm_bm_get_luns_fn *get_luns;
+
+	/* Statistics */
+	nvm_bm_free_blocks_print_fn *free_blocks_print;
+	struct list_head list;
+};
+
+extern int nvm_register_bm(struct nvm_bm_type *);
+extern void nvm_unregister_bm(struct nvm_bm_type *);
+
+extern struct nvm_block *nvm_get_blk(struct nvm_dev *, struct nvm_lun *,
+								unsigned long);
+extern void nvm_put_blk(struct nvm_dev *, struct nvm_block *);
+extern int nvm_erase_blk(struct nvm_dev *, struct nvm_block *);
+
+extern int nvm_register(struct request_queue *, char *,
+						struct nvm_dev_ops *);
+extern void nvm_unregister(char *);
+
+extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
+
+/* We currently assume that the lightnvm device accepts data in 512 byte
+ * chunks. This should be set to the smallest command size available for a
+ * given device.
+ */
+#define NVM_SECTOR (512)
+#define EXPOSED_PAGE_SIZE (4096)
+
+#define NR_PHY_IN_LOG (EXPOSED_PAGE_SIZE / NVM_SECTOR)
+
+#define NVM_MSG_PREFIX "nvm"
+#define ADDR_EMPTY (~0ULL)
+
+static inline unsigned long nvm_get_rq_flags(struct request *rq)
+{
+	return (unsigned long)rq->cmd;
+}
+
+#else /* CONFIG_NVM */
+
+struct nvm_dev_ops;
+struct nvm_dev;
+struct nvm_lun;
+struct nvm_block;
+struct nvm_rq {
+};
+struct nvm_tgt_type;
+struct nvm_tgt_instance;
+
+static inline struct nvm_tgt_type *nvm_find_target_type(const char *c)
+{
+	return NULL;
+}
+static inline int nvm_register(struct request_queue *q, char *disk_name,
+							struct nvm_dev_ops *ops)
+{
+	return -EINVAL;
+}
+static inline void nvm_unregister(char *disk_name) {}
+static inline struct nvm_block *nvm_get_blk(struct nvm_dev *dev,
+				struct nvm_lun *lun, unsigned long flags)
+{
+	return NULL;
+}
+static inline void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk) {}
+static inline int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	return -EINVAL;
+}
+
+#endif /* CONFIG_NVM */
+#endif /* NVM_H */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v7 1/5] lightnvm: Support for Open-Channel SSDs
@ 2015-08-07 14:29   ` Matias Bjørling
  0 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-08-07 14:29 UTC (permalink / raw)


Open-channel SSDs are devices that share responsibilities with the host
in order to implement and maintain features that typical SSDs keep
strictly in firmware. These include (i) the Flash Translation Layer
(FTL), (ii) bad block management, and (iii) hardware units such as the
flash controller, the interface controller, and a large number of flash
chips. In this way, open-channel SSDs expose direct access to their
physical flash storage, while keeping a subset of the internal features
of SSDs.

LightNVM is a specification that gives support to open-channel SSDs.
It allows the host to manage data placement, garbage collection, and
parallelism. Device-specific responsibilities such as bad block
management, FTL extensions to support atomic IOs, or metadata
persistence are still handled by the device.

The implementation of LightNVM consists of two parts: core and
(multiple) targets. The core implements functionality shared across
targets: initialization, teardown, and statistics. The targets implement
the interface that exposes physical flash to user-space applications.
Examples of such targets include key-value stores, object stores, and
traditional block devices, which can be application-specific.
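
To illustrate the target-side API added below, here is a minimal,
hypothetical target skeleton; the foo_* names are placeholders, end_io
and real I/O handling are elided, and the rrpc target later in this
series shows a complete implementation:

	#include <linux/module.h>
	#include <linux/lightnvm.h>

	static void foo_make_rq(struct request_queue *q, struct bio *bio)
	{
		bio_endio(bio, 0);	/* complete every bio immediately */
	}

	static sector_t foo_capacity(void *private)
	{
		return 0;		/* expose an empty device */
	}

	static void *foo_init(struct nvm_dev *dev, struct gendisk *tdisk,
			      int lun_begin, int lun_end)
	{
		return NULL;		/* no per-target state in this sketch */
	}

	static struct nvm_tgt_type tt_foo = {
		.name		= "foo",
		.version	= {0, 0, 1},
		.make_rq	= foo_make_rq,
		.capacity	= foo_capacity,
		.init		= foo_init,
	};

	static int __init foo_module_init(void)
	{
		return nvm_register_target(&tt_foo);
	}

	static void __exit foo_module_exit(void)
	{
		nvm_unregister_target(&tt_foo);
	}

	module_init(foo_module_init);
	module_exit(foo_module_exit);

A real target maps incoming bios to physical flash addresses and
submits them with nvm_submit_io(), as the rrpc target does.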

Contributions in this patch from:

  Javier Gonzalez <jg@lightnvm.io>
  Jesper Madsen <jmad@itu.dk>

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
 MAINTAINERS               |   8 +
 drivers/Kconfig           |   2 +
 drivers/Makefile          |   5 +
 drivers/lightnvm/Kconfig  |  16 ++
 drivers/lightnvm/Makefile |   5 +
 drivers/lightnvm/core.c   | 590 ++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/lightnvm.h  | 335 ++++++++++++++++++++++++++
 7 files changed, 961 insertions(+)
 create mode 100644 drivers/lightnvm/Kconfig
 create mode 100644 drivers/lightnvm/Makefile
 create mode 100644 drivers/lightnvm/core.c
 create mode 100644 include/linux/lightnvm.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 2d3d55c..d149104 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6162,6 +6162,14 @@ S:	Supported
 F:	drivers/nvdimm/pmem.c
 F:	include/linux/pmem.h
 
+LIGHTNVM PLATFORM SUPPORT
+M:	Matias Bjorling <mb@lightnvm.io>
+W:	http://github.com/OpenChannelSSD
+S:	Maintained
+F:	drivers/lightnvm/
+F:	include/linux/lightnvm.h
+F:	include/uapi/linux/lightnvm.h
+
 LINUX FOR IBM pSERIES (RS/6000)
 M:	Paul Mackerras <paulus@au.ibm.com>
 W:	http://www.ibm.com/linux/ltc/projects/ppc
diff --git a/drivers/Kconfig b/drivers/Kconfig
index 6e973b8..3992902 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -42,6 +42,8 @@ source "drivers/net/Kconfig"
 
 source "drivers/isdn/Kconfig"
 
+source "drivers/lightnvm/Kconfig"
+
 # input before char - char/joystick depends on it. As does USB.
 
 source "drivers/input/Kconfig"
diff --git a/drivers/Makefile b/drivers/Makefile
index b64b49f..75978ab 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -63,6 +63,10 @@ obj-$(CONFIG_FB_I810)           += video/fbdev/i810/
 obj-$(CONFIG_FB_INTEL)          += video/fbdev/intelfb/
 
 obj-$(CONFIG_PARPORT)		+= parport/
+
+# lightnvm/ comes before block to initialize bm before usage
+obj-$(CONFIG_NVM)		+= lightnvm/
+
 obj-y				+= base/ block/ misc/ mfd/ nfc/
 obj-$(CONFIG_LIBNVDIMM)		+= nvdimm/
 obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
@@ -165,3 +169,4 @@ obj-$(CONFIG_RAS)		+= ras/
 obj-$(CONFIG_THUNDERBOLT)	+= thunderbolt/
 obj-$(CONFIG_CORESIGHT)		+= hwtracing/coresight/
 obj-$(CONFIG_ANDROID)		+= android/
+
diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
new file mode 100644
index 0000000..1f8412c
--- /dev/null
+++ b/drivers/lightnvm/Kconfig
@@ -0,0 +1,16 @@
+#
+# Open-Channel SSD NVM configuration
+#
+
+menuconfig NVM
+	bool "Open-Channel SSD target support"
+	depends on BLOCK
+	help
+	  Say Y here to enable Open-channel SSDs.
+
+	  Open-Channel SSDs implement a set of extensions to SSDs that
+	  expose direct access to the underlying non-volatile memory.
+
+	  If you say N, all options in this submenu will be skipped and
+	  disabled; only do this if you know what you are doing.
+
diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
new file mode 100644
index 0000000..38185e9
--- /dev/null
+++ b/drivers/lightnvm/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for Open-Channel SSDs.
+#
+
+obj-$(CONFIG_NVM)		:= core.o
diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
new file mode 100644
index 0000000..6499922
--- /dev/null
+++ b/drivers/lightnvm/core.c
@@ -0,0 +1,590 @@
+/*
+ * Copyright (C) 2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mabj@itu.dk>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; see the file COPYING.  If not, write to
+ * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
+ * USA.
+ *
+ */
+
+#include <linux/blkdev.h>
+#include <linux/blk-mq.h>
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/sem.h>
+#include <linux/bitmap.h>
+#include <linux/module.h>
+
+#include <linux/lightnvm.h>
+
+static LIST_HEAD(nvm_targets);
+static LIST_HEAD(nvm_bms);
+static LIST_HEAD(nvm_devices);
+static DECLARE_RWSEM(nvm_lock);
+
+struct nvm_tgt_type *nvm_find_target_type(const char *name)
+{
+	struct nvm_tgt_type *tt;
+
+	list_for_each_entry(tt, &nvm_targets, list)
+		if (!strcmp(name, tt->name))
+			return tt;
+
+	return NULL;
+}
+
+int nvm_register_target(struct nvm_tgt_type *tt)
+{
+	int ret = 0;
+
+	down_write(&nvm_lock);
+	if (nvm_find_target_type(tt->name))
+		ret = -EEXIST;
+	else
+		list_add(&tt->list, &nvm_targets);
+	up_write(&nvm_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(nvm_register_target);
+
+void nvm_unregister_target(struct nvm_tgt_type *tt)
+{
+	if (!tt)
+		return;
+
+	down_write(&nvm_lock);
+	list_del(&tt->list);
+	up_write(&nvm_lock);
+}
+EXPORT_SYMBOL(nvm_unregister_target);
+
+void *nvm_alloc_ppalist(struct nvm_dev *dev, gfp_t mem_flags,
+							dma_addr_t *dma_handler)
+{
+	return dev->ops->alloc_ppalist(dev->q, dev->ppalist_pool, mem_flags,
+								dma_handler);
+}
+EXPORT_SYMBOL(nvm_alloc_ppalist);
+
+void nvm_free_ppalist(struct nvm_dev *dev, void *ppa_list,
+							dma_addr_t dma_handler)
+{
+	dev->ops->free_ppalist(dev->ppalist_pool, ppa_list, dma_handler);
+}
+EXPORT_SYMBOL(nvm_free_ppalist);
+
+struct nvm_bm_type *nvm_find_bm_type(const char *name)
+{
+	struct nvm_bm_type *bt;
+
+	list_for_each_entry(bt, &nvm_bms, list)
+		if (!strcmp(name, bt->name))
+			return bt;
+
+	return NULL;
+}
+
+int nvm_register_bm(struct nvm_bm_type *bt)
+{
+	int ret = 0;
+
+	down_write(&nvm_lock);
+	if (nvm_find_bm_type(bt->name))
+		ret = -EEXIST;
+	else
+		list_add(&bt->list, &nvm_bms);
+	up_write(&nvm_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(nvm_register_bm);
+
+void nvm_unregister_bm(struct nvm_bm_type *bt)
+{
+	if (!bt)
+		return;
+
+	down_write(&nvm_lock);
+	list_del(&bt->list);
+	up_write(&nvm_lock);
+}
+EXPORT_SYMBOL(nvm_unregister_bm);
+
+struct nvm_dev *nvm_find_nvm_dev(const char *name)
+{
+	struct nvm_dev *dev;
+
+	list_for_each_entry(dev, &nvm_devices, devices)
+		if (!strcmp(name, dev->name))
+			return dev;
+
+	return NULL;
+}
+
+struct nvm_block *nvm_get_blk(struct nvm_dev *dev, struct nvm_lun *lun,
+							unsigned long flags)
+{
+	return dev->bm->get_blk(dev, lun, flags);
+}
+EXPORT_SYMBOL(nvm_get_blk);
+
+/* Assumes that all valid pages have already been moved to the bm on release */
+void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	return dev->bm->put_blk(dev, blk);
+}
+EXPORT_SYMBOL(nvm_put_blk);
+
+int nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
+{
+	return dev->ops->submit_io(dev->q, rqd);
+}
+EXPORT_SYMBOL(nvm_submit_io);
+
+/* Send erase command to device */
+int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	return dev->bm->erase_blk(dev, blk);
+}
+EXPORT_SYMBOL(nvm_erase_blk);
+
+static void nvm_core_free(struct nvm_dev *dev)
+{
+	kfree(dev->identity.chnls);
+	kfree(dev);
+}
+
+static int nvm_core_init(struct nvm_dev *dev)
+{
+	dev->nr_luns = dev->identity.nchannels;
+	dev->sector_size = EXPOSED_PAGE_SIZE;
+	INIT_LIST_HEAD(&dev->online_targets);
+
+	return 0;
+}
+
+static void nvm_free(struct nvm_dev *dev)
+{
+	if (!dev)
+		return;
+
+	if (dev->bm)
+		dev->bm->unregister_bm(dev);
+
+	nvm_core_free(dev);
+}
+
+int nvm_validate_features(struct nvm_dev *dev)
+{
+	struct nvm_get_features gf;
+	int ret;
+
+	ret = dev->ops->get_features(dev->q, &gf);
+	if (ret)
+		return ret;
+
+	dev->features = gf;
+
+	return 0;
+}
+
+int nvm_validate_responsibility(struct nvm_dev *dev)
+{
+	if (!dev->ops->set_responsibility)
+		return 0;
+
+	return dev->ops->set_responsibility(dev->q, 0);
+}
+
+int nvm_init(struct nvm_dev *dev)
+{
+	struct nvm_bm_type *bt;
+	int ret = 0;
+
+	if (!dev->q || !dev->ops) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	if (dev->ops->identify(dev->q, &dev->identity)) {
+		pr_err("nvm: device could not be identified\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	pr_debug("nvm dev: ver %u type %u chnls %u\n",
+			dev->identity.ver_id,
+			dev->identity.nvm_type,
+			dev->identity.nchannels);
+
+	ret = nvm_validate_features(dev);
+	if (ret) {
+		pr_err("nvm: disk features are not supported.");
+		goto err;
+	}
+
+	ret = nvm_validate_responsibility(dev);
+	if (ret) {
+		pr_err("nvm: disk responsibilities are not supported.");
+		goto err;
+	}
+
+	ret = nvm_core_init(dev);
+	if (ret) {
+		pr_err("nvm: could not initialize core structures.\n");
+		goto err;
+	}
+
+	if (!dev->nr_luns) {
+		pr_err("nvm: device did not expose any luns.\n");
+		goto err;
+	}
+
+	/* register the device with a supported BM */
+	list_for_each_entry(bt, &nvm_bms, list) {
+		ret = bt->register_bm(dev);
+		if (ret < 0)
+			goto err; /* initialization failed */
+		if (ret > 0) {
+			dev->bm = bt;
+			break; /* successfully initialized */
+		}
+	}
+
+	if (!ret) {
+		pr_info("nvm: no compatible bm was found.\n");
+		return 0;
+	}
+
+	pr_info("nvm: registered %s with luns: %u blocks: %lu sector size: %d\n",
+		dev->name, dev->nr_luns, dev->total_blocks, dev->sector_size);
+
+	return 0;
+err:
+	nvm_free(dev);
+	pr_err("nvm: failed to initialize nvm\n");
+	return ret;
+}
+
+void nvm_exit(struct nvm_dev *dev)
+{
+	if (dev->ppalist_pool)
+		dev->ops->destroy_ppa_pool(dev->ppalist_pool);
+	nvm_free(dev);
+
+	pr_info("nvm: successfully unloaded\n");
+}
+
+static const struct block_device_operations nvm_fops = {
+	.owner		= THIS_MODULE,
+};
+
+static int nvm_create_target(struct nvm_dev *dev, char *ttname, char *tname,
+						int lun_begin, int lun_end)
+{
+	struct request_queue *tqueue;
+	struct gendisk *tdisk;
+	struct nvm_tgt_type *tt;
+	struct nvm_target *t;
+	void *targetdata;
+
+	tt = nvm_find_target_type(ttname);
+	if (!tt) {
+		pr_err("nvm: target type %s not found\n", ttname);
+		return -EINVAL;
+	}
+
+	down_write(&nvm_lock);
+	list_for_each_entry(t, &dev->online_targets, list) {
+		if (!strcmp(tname, t->disk->disk_name)) {
+			pr_err("nvm: target name already exists.\n");
+			up_write(&nvm_lock);
+			return -EINVAL;
+		}
+	}
+	up_write(&nvm_lock);
+
+	t = kmalloc(sizeof(struct nvm_target), GFP_KERNEL);
+	if (!t)
+		return -ENOMEM;
+
+	tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node);
+	if (!tqueue)
+		goto err_t;
+	blk_queue_make_request(tqueue, tt->make_rq);
+
+	tdisk = alloc_disk(0);
+	if (!tdisk)
+		goto err_queue;
+
+	sprintf(tdisk->disk_name, "%s", tname);
+	tdisk->flags = GENHD_FL_EXT_DEVT;
+	tdisk->major = 0;
+	tdisk->first_minor = 0;
+	tdisk->fops = &nvm_fops;
+	tdisk->queue = tqueue;
+
+	targetdata = tt->init(dev, tdisk, lun_begin, lun_end);
+	if (IS_ERR(targetdata))
+		goto err_init;
+
+	tdisk->private_data = targetdata;
+	tqueue->queuedata = targetdata;
+
+	blk_queue_max_hw_sectors(tqueue, 8 * dev->ops->max_phys_sect);
+
+	set_capacity(tdisk, tt->capacity(targetdata));
+	add_disk(tdisk);
+
+	t->type = tt;
+	t->disk = tdisk;
+
+	down_write(&nvm_lock);
+	list_add_tail(&t->list, &dev->online_targets);
+	up_write(&nvm_lock);
+
+	return 0;
+err_init:
+	put_disk(tdisk);
+err_queue:
+	blk_cleanup_queue(tqueue);
+err_t:
+	kfree(t);
+	return -ENOMEM;
+}
+
+static void nvm_remove_target(struct nvm_target *t)
+{
+	struct nvm_tgt_type *tt = t->type;
+	struct gendisk *tdisk = t->disk;
+	struct request_queue *q = tdisk->queue;
+
+	lockdep_assert_held(&nvm_lock);
+
+	del_gendisk(tdisk);
+	if (tt->exit)
+		tt->exit(tdisk->private_data);
+
+	blk_cleanup_queue(q);
+
+	put_disk(tdisk);
+
+	list_del(&t->list);
+	kfree(t);
+}
+
+static int nvm_configure_show(const char *val)
+{
+	struct nvm_dev *dev;
+	char opcode, devname[DISK_NAME_LEN];
+	int ret;
+
+	ret = sscanf(val, "%c %s", &opcode, devname);
+	if (ret != 2) {
+		pr_err("nvm: invalid command. Use \"opcode devicename\".\n");
+		return -EINVAL;
+	}
+
+	dev = nvm_find_nvm_dev(devname);
+	if (!dev) {
+		pr_err("nvm: device not found\n");
+		return -EINVAL;
+	}
+
+	if (!dev->bm)
+		return 0;
+
+	dev->bm->free_blocks_print(dev);
+
+	return 0;
+}
+
+static int nvm_configure_del(const char *val)
+{
+	struct nvm_target *t = NULL;
+	struct nvm_dev *dev;
+	char opcode, tname[255];
+	int ret;
+
+	ret = sscanf(val, "%c %s", &opcode, tname);
+	if (ret != 2) {
+		pr_err("nvm: invalid command. Use \"d targetname\".\n");
+		return -EINVAL;
+	}
+
+	down_write(&nvm_lock);
+	list_for_each_entry(dev, &nvm_devices, devices)
+		list_for_each_entry(t, &dev->online_targets, list) {
+			if (!strcmp(tname, t->disk->disk_name)) {
+				nvm_remove_target(t);
+				ret = 0;
+				break;
+			}
+		}
+	up_write(&nvm_lock);
+
+	if (ret) {
+		pr_err("nvm: target \"%s\" doesn't exist.\n", tname);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int nvm_configure_add(const char *val)
+{
+	struct nvm_dev *dev;
+	char opcode, devname[DISK_NAME_LEN], tname[255], tgtengine[255];
+	int lun_begin, lun_end, ret;
+
+	ret = sscanf(val, "%c %s %s %s %u:%u", &opcode, devname, tname,
+					tgtengine, &lun_begin, &lun_end);
+	if (ret != 6) {
+		pr_err("nvm: invalid command. Use \"opcode device name tgtengine lun_begin:lun_end\".\n");
+		return -EINVAL;
+	}
+
+	dev = nvm_find_nvm_dev(devname);
+	if (!dev) {
+		pr_err("nvm: device not found\n");
+		return -EINVAL;
+	}
+
+	if (lun_begin > lun_end || lun_end > dev->nr_luns) {
+		pr_err("nvm: lun out of bound (%u:%u > %u)\n",
+					lun_begin, lun_end, dev->nr_luns);
+		return -EINVAL;
+	}
+
+	return nvm_create_target(dev, tgtengine, tname, lun_begin, lun_end);
+}
+
+/* Exposes an administrative interface through
+ * /sys/module/lnvm/parameters/configure_debug */
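+/* Example usage (names illustrative; assumes a registered device "nulln0"):
+ *
+ *   echo "a nulln0 mydev0 rrpc 0:3" > /sys/module/lnvm/parameters/configure_debug
+ *   echo "s nulln0" > /sys/module/lnvm/parameters/configure_debug
+ *   echo "d mydev0" > /sys/module/lnvm/parameters/configure_debug
+ */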
+static int nvm_configure_by_str_event(const char *val,
+					const struct kernel_param *kp)
+{
+	char opcode;
+	int ret;
+
+	ret = sscanf(val, "%c", &opcode);
+	if (ret != 1) {
+		pr_err("nvm: configure must be in the format of \"opcode ...\"\n");
+		return -EINVAL;
+	}
+
+	switch (opcode) {
+	case 'a':
+		return nvm_configure_add(val);
+	case 'd':
+		return nvm_configure_del(val);
+	case 's':
+		return nvm_configure_show(val);
+	default:
+		pr_err("nvm: invalid opcode.\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int nvm_configure_get(char *buf, const struct kernel_param *kp)
+{
+	int sz = 0;
+	char *buf_start = buf;
+	struct nvm_dev *dev;
+
+	buf += sprintf(buf, "available devices:\n");
+	down_write(&nvm_lock);
+	list_for_each_entry(dev, &nvm_devices, devices) {
+		sz = buf - buf_start;
+		if (sz > 4095 - DISK_NAME_LEN)
+			break;
+		buf += sprintf(buf, " %s\n", dev->name);
+	}
+	up_write(&nvm_lock);
+
+	return buf - buf_start - 1;
+}
+
+static const struct kernel_param_ops nvm_configure_by_str_event_param_ops = {
+	.set	= nvm_configure_by_str_event,
+	.get	= nvm_configure_get,
+};
+
+#undef MODULE_PARAM_PREFIX
+#define MODULE_PARAM_PREFIX	"lnvm."
+
+module_param_cb(configure_debug, &nvm_configure_by_str_event_param_ops, NULL,
+									0644);
+
+int nvm_register(struct request_queue *q, char *disk_name,
+							struct nvm_dev_ops *ops)
+{
+	struct nvm_dev *dev;
+	int ret;
+
+	if (!ops->identify || !ops->get_features)
+		return -EINVAL;
+
+	dev = kzalloc(sizeof(struct nvm_dev), GFP_KERNEL);
+	if (!dev)
+		return -ENOMEM;
+
+	dev->q = q;
+	dev->ops = ops;
+	strncpy(dev->name, disk_name, DISK_NAME_LEN);
+
+	ret = nvm_init(dev);
+	if (ret)
+		return ret; /* nvm_init() frees dev on failure */
+
+	down_write(&nvm_lock);
+	list_add(&dev->devices, &nvm_devices);
+	up_write(&nvm_lock);
+
+	if (dev->ops->max_phys_sect > 256) {
+		pr_info("nvm: maximum number of sectors supported in target is 255. max_phys_sect set to 255\n");
+		dev->ops->max_phys_sect = 255;
+	}
+
+	if (dev->ops->max_phys_sect > 1) {
+		dev->ppalist_pool = dev->ops->create_ppa_pool(dev->q);
+		if (!dev->ppalist_pool) {
+			pr_err("nvm: could not create ppa pool\n");
+			down_write(&nvm_lock);
+			list_del(&dev->devices);
+			up_write(&nvm_lock);
+			nvm_free(dev);
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(nvm_register);
+
+void nvm_unregister(char *disk_name)
+{
+	struct nvm_dev *dev = nvm_find_nvm_dev(disk_name);
+
+	if (!dev) {
+		pr_err("nvm: could not find device %s on unregister\n",
+								disk_name);
+		return;
+	}
+
+	nvm_exit(dev);
+
+	down_write(&nvm_lock);
+	list_del(&dev->devices);
+	up_write(&nvm_lock);
+}
+EXPORT_SYMBOL(nvm_unregister);
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
new file mode 100644
index 0000000..9654354
--- /dev/null
+++ b/include/linux/lightnvm.h
@@ -0,0 +1,335 @@
+#ifndef NVM_H
+#define NVM_H
+
+enum {
+	NVM_IO_OK = 0,
+	NVM_IO_REQUEUE = 1,
+	NVM_IO_DONE = 2,
+	NVM_IO_ERR = 3,
+
+	NVM_IOTYPE_NONE = 0,
+	NVM_IOTYPE_GC = 1,
+};
+
+#ifdef CONFIG_NVM
+
+#include <linux/blkdev.h>
+#include <linux/types.h>
+#include <linux/file.h>
+#include <linux/dmapool.h>
+
+enum {
+	/* HW Responsibilities */
+	NVM_RSP_L2P	= 1 << 0,
+	NVM_RSP_GC	= 1 << 1,
+	NVM_RSP_ECC	= 1 << 2,
+
+	/* Physical NVM Type */
+	NVM_NVMT_BLK	= 0,
+	NVM_NVMT_BYTE	= 1,
+
+	/* Internal IO Scheduling algorithm */
+	NVM_IOSCHED_CHANNEL	= 0,
+	NVM_IOSCHED_CHIP	= 1,
+
+	/* Status codes */
+	NVM_SUCCESS		= 0,
+	NVM_RSP_NOT_CHANGEABLE	= 1,
+};
+
+struct nvm_id_chnl {
+	u64	laddr_begin;
+	u64	laddr_end;
+	u32	oob_size;
+	u32	queue_size;
+	u32	gran_read;
+	u32	gran_write;
+	u32	gran_erase;
+	u32	t_r;
+	u32	t_sqr;
+	u32	t_w;
+	u32	t_sqw;
+	u32	t_e;
+	u16	chnl_parallelism;
+	u8	io_sched;
+	u8	res[133];
+};
+
+struct nvm_id {
+	u8	ver_id;
+	u8	nvm_type;
+	u16	nchannels;
+	struct nvm_id_chnl *chnls;
+};
+
+struct nvm_get_features {
+	u64	rsp;
+	u64	ext;
+};
+
+struct nvm_target {
+	struct list_head list;
+	struct nvm_tgt_type *type;
+	struct gendisk *disk;
+};
+
+struct nvm_tgt_instance {
+	struct nvm_tgt_type *tt;
+};
+
+struct nvm_rq {
+	struct nvm_tgt_instance *ins;
+	struct bio *bio;
+	union {
+		sector_t ppa;
+		sector_t *ppa_list;
+	};
+	/* DMA handler to be used by underlying devices supporting DMA */
+	dma_addr_t dma_ppa_list;
+	uint8_t npages;
+};
+
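+/* A target's private per-request data (PDU) is allocated directly behind the
+ * nvm_rq it belongs to, in a single allocation of
+ * sizeof(struct nvm_rq) + tgt_pdu_size. The helpers below convert between
+ * the two views. */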
+static inline struct nvm_rq *nvm_rq_from_pdu(void *pdu)
+{
+	return pdu - sizeof(struct nvm_rq);
+}
+
+static inline void *nvm_rq_to_pdu(struct nvm_rq *rqdata)
+{
+	return rqdata + 1;
+}
+
+struct nvm_block;
+
+typedef int (nvm_l2p_update_fn)(u64, u64, u64 *, void *);
+typedef int (nvm_bb_update_fn)(u32, void *, unsigned int, void *);
+typedef int (nvm_id_fn)(struct request_queue *, struct nvm_id *);
+typedef int (nvm_get_features_fn)(struct request_queue *,
+						struct nvm_get_features *);
+typedef int (nvm_set_rsp_fn)(struct request_queue *, u64);
+typedef int (nvm_get_l2p_tbl_fn)(struct request_queue *, u64, u64,
+				nvm_l2p_update_fn *, void *);
+typedef int (nvm_op_bb_tbl_fn)(struct request_queue *, int, unsigned int,
+				nvm_bb_update_fn *, void *);
+typedef int (nvm_submit_io_fn)(struct request_queue *, struct nvm_rq *);
+typedef int (nvm_erase_blk_fn)(struct request_queue *, sector_t);
+typedef void *(nvm_create_ppapool_fn)(struct request_queue *);
+typedef void (nvm_destroy_ppapool_fn)(void *);
+typedef void *(nvm_alloc_ppalist_fn)(struct request_queue *, void *, gfp_t,
+								dma_addr_t*);
+typedef void (nvm_free_ppalist_fn)(void *, void*, dma_addr_t);
+
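+/* identify and get_features are mandatory (nvm_register() rejects devices
+ * without them); set_responsibility and get_l2p_tbl are optional and are
+ * NULL-checked by their callers. */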
+struct nvm_dev_ops {
+	nvm_id_fn		*identify;
+	nvm_get_features_fn	*get_features;
+	nvm_set_rsp_fn		*set_responsibility;
+	nvm_get_l2p_tbl_fn	*get_l2p_tbl;
+	nvm_op_bb_tbl_fn	*set_bb_tbl;
+	nvm_op_bb_tbl_fn	*get_bb_tbl;
+
+	nvm_submit_io_fn	*submit_io;
+	nvm_erase_blk_fn	*erase_block;
+
+	nvm_create_ppapool_fn	*create_ppa_pool;
+	nvm_destroy_ppapool_fn	*destroy_ppa_pool;
+	nvm_alloc_ppalist_fn	*alloc_ppalist;
+	nvm_free_ppalist_fn	*free_ppalist;
+
+	uint8_t			max_phys_sect;
+};
+
+struct nvm_lun {
+	int id;
+
+	int nr_pages_per_blk;
+	unsigned int nr_blocks;		/* end_block - start_block. */
+	unsigned int nr_free_blocks;	/* Number of unused blocks */
+
+	struct nvm_block *blocks;
+
+	spinlock_t lock;
+};
+
+struct nvm_block {
+	struct list_head list;
+	struct nvm_lun *lun;
+	unsigned long long id;
+
+	void *priv;
+	int type;
+};
+
+struct nvm_dev {
+	struct nvm_dev_ops *ops;
+
+	struct list_head devices;
+	struct list_head online_targets;
+
+	/* Block manager */
+	struct nvm_bm_type *bm;
+	void *bmp;
+
+	/* Target information */
+	int nr_luns;
+
+	/* Calculated/Cached values. These do not reflect the actual usable
+	 * blocks at run-time. */
+	unsigned long total_pages;
+	unsigned long total_blocks;
+	unsigned max_pages_per_blk;
+
+	uint32_t sector_size;
+
+	void *ppalist_pool;
+
+	/* Identity */
+	struct nvm_id identity;
+	struct nvm_get_features features;
+
+	/* Backend device */
+	struct request_queue *q;
+	char name[DISK_NAME_LEN];
+};
+
+typedef void (nvm_tgt_make_rq_fn)(struct request_queue *, struct bio *);
+typedef sector_t (nvm_tgt_capacity_fn)(void *);
+typedef void (nvm_tgt_end_io_fn)(struct nvm_rq *, int);
+typedef void *(nvm_tgt_init_fn)(struct nvm_dev *, struct gendisk *, int, int);
+typedef void (nvm_tgt_exit_fn)(void *);
+
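+/* make_rq, capacity and init are required (nvm_create_target() calls them
+ * unconditionally); exit is optional and NULL-checked on teardown. */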
+struct nvm_tgt_type {
+	const char *name;
+	unsigned int version[3];
+
+	/* target entry points */
+	nvm_tgt_make_rq_fn *make_rq;
+	nvm_tgt_capacity_fn *capacity;
+	nvm_tgt_end_io_fn *end_io;
+
+	/* module-specific init/teardown */
+	nvm_tgt_init_fn *init;
+	nvm_tgt_exit_fn *exit;
+
+	/* For internal use */
+	struct list_head list;
+};
+
+extern int nvm_register_target(struct nvm_tgt_type *);
+extern void nvm_unregister_target(struct nvm_tgt_type *);
+
+extern void *nvm_alloc_ppalist(struct nvm_dev *, gfp_t, dma_addr_t *);
+extern void nvm_free_ppalist(struct nvm_dev *, void *, dma_addr_t);
+
+typedef int (nvm_bm_register_fn)(struct nvm_dev *);
+typedef void (nvm_bm_unregister_fn)(struct nvm_dev *);
+typedef struct nvm_block *(nvm_bm_get_blk_fn)(struct nvm_dev *,
+					      struct nvm_lun *, unsigned long);
+typedef void (nvm_bm_put_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef int (nvm_bm_open_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef int (nvm_bm_close_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef void (nvm_bm_flush_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef int (nvm_bm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
+typedef void (nvm_bm_end_io_fn)(struct nvm_rq *, int);
+typedef int (nvm_bm_erase_blk_fn)(struct nvm_dev *, struct nvm_block *);
+typedef int (nvm_bm_register_prog_err_fn)(struct nvm_dev *,
+	     void (prog_err_fn)(struct nvm_dev *, struct nvm_block *));
+typedef int (nvm_bm_save_state_fn)(struct file *);
+typedef int (nvm_bm_restore_state_fn)(struct file *);
+typedef struct nvm_lun *(nvm_bm_get_luns_fn)(struct nvm_dev *, int, int);
+typedef void (nvm_bm_free_blocks_print_fn)(struct nvm_dev *);
+
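+/* register_bm returns > 0 when the manager takes ownership of the device,
+ * 0 to let other managers try, and < 0 on initialization error (see
+ * nvm_init()). */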
+struct nvm_bm_type {
+	const char *name;
+	unsigned int version[3];
+
+	nvm_bm_register_fn *register_bm;
+	nvm_bm_unregister_fn *unregister_bm;
+
+	/* Block administration callbacks */
+	nvm_bm_get_blk_fn *get_blk;
+	nvm_bm_put_blk_fn *put_blk;
+	nvm_bm_open_blk_fn *open_blk;
+	nvm_bm_close_blk_fn *close_blk;
+	nvm_bm_flush_blk_fn *flush_blk;
+
+	nvm_bm_submit_io_fn *submit_io;
+	nvm_bm_end_io_fn *end_io;
+	nvm_bm_erase_blk_fn *erase_blk;
+
+	/* State management for debugging purposes */
+	nvm_bm_save_state_fn *save_state;
+	nvm_bm_restore_state_fn *restore_state;
+
+	/* Configuration management */
+	nvm_bm_get_luns_fn *get_luns;
+
+	/* Statistics */
+	nvm_bm_free_blocks_print_fn *free_blocks_print;
+	struct list_head list;
+};
+
+extern int nvm_register_bm(struct nvm_bm_type *);
+extern void nvm_unregister_bm(struct nvm_bm_type *);
+
+extern struct nvm_block *nvm_get_blk(struct nvm_dev *, struct nvm_lun *,
+								unsigned long);
+extern void nvm_put_blk(struct nvm_dev *, struct nvm_block *);
+extern int nvm_erase_blk(struct nvm_dev *, struct nvm_block *);
+
+extern int nvm_register(struct request_queue *, char *,
+						struct nvm_dev_ops *);
+extern void nvm_unregister(char *);
+
+extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
+
+/* We currently assume that the lightnvm device accepts data in 512 byte
+ * chunks. This should be set to the smallest command size available for a
+ * given device.
+ */
+#define NVM_SECTOR (512)
+#define EXPOSED_PAGE_SIZE (4096)
+
+#define NR_PHY_IN_LOG (EXPOSED_PAGE_SIZE / NVM_SECTOR)
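+/* with the defaults above, NR_PHY_IN_LOG = 4096 / 512 = 8 */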
+
+#define NVM_MSG_PREFIX "nvm"
+#define ADDR_EMPTY (~0ULL)
+
+static inline unsigned long nvm_get_rq_flags(struct request *rq)
+{
+	return (unsigned long)rq->cmd;
+}
+
+#else /* CONFIG_NVM */
+
+struct nvm_dev_ops;
+struct nvm_dev;
+struct nvm_lun;
+struct nvm_block;
+struct nvm_rq {
+};
+struct nvm_tgt_type;
+struct nvm_tgt_instance;
+
+static inline struct nvm_tgt_type *nvm_find_target_type(const char *c)
+{
+	return NULL;
+}
+static inline int nvm_register(struct request_queue *q, char *disk_name,
+							struct nvm_dev_ops *ops)
+{
+	return -EINVAL;
+}
+static inline void nvm_unregister(char *disk_name) {}
+static inline struct nvm_block *nvm_get_blk(struct nvm_dev *dev,
+				struct nvm_lun *lun, unsigned long flags)
+{
+	return NULL;
+}
+static inline void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk) {}
+static inline int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	return -EINVAL;
+}
+
+#endif /* CONFIG_NVM */
+#endif /* NVM_H */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v7 2/5] lightnvm: Hybrid Open-Channel SSD RRPC target
  2015-08-07 14:29 ` Matias Bjørling
@ 2015-08-07 14:29   ` Matias Bjørling
  -1 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-08-07 14:29 UTC (permalink / raw)
  To: hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

This target implements a simple hybrid FTL for Open-Channel SSDs. It
performs round-robin selection across channels and luns, uses a simple
greedy cost-based garbage collector, and exposes the physical flash as
a block device.

Signed-off-by: Javier González <jg@lightnvm.io>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
 drivers/lightnvm/Kconfig  |   10 +
 drivers/lightnvm/Makefile |    1 +
 drivers/lightnvm/core.c   |    3 +-
 drivers/lightnvm/rrpc.c   | 1296 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/lightnvm/rrpc.h   |  236 +++++++++
 include/linux/lightnvm.h  |    5 +-
 6 files changed, 1547 insertions(+), 4 deletions(-)
 create mode 100644 drivers/lightnvm/rrpc.c
 create mode 100644 drivers/lightnvm/rrpc.h

diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
index 1f8412c..ab1fe57 100644
--- a/drivers/lightnvm/Kconfig
+++ b/drivers/lightnvm/Kconfig
@@ -14,3 +14,13 @@ menuconfig NVM
 	  If you say N, all options in this submenu will be skipped and
 	  disabled; only do this if you know what you are doing.
 
+if NVM
+
+config NVM_RRPC
+	tristate "Round-robin Hybrid Open-Channel SSD target"
+	---help---
+	Allows an open-channel SSD to be exposed as a block device to the
+	host. The target is implemented using a linear mapping table and
+	cost-based garbage collection. It is optimized for 4K IO sizes.
+
+endif # NVM
diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
index 38185e9..b2a39e2 100644
--- a/drivers/lightnvm/Makefile
+++ b/drivers/lightnvm/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-$(CONFIG_NVM)		:= core.o
+obj-$(CONFIG_NVM_RRPC)		+= rrpc.o
diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 6499922..5e4c2b8 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -169,7 +169,7 @@ static void nvm_core_free(struct nvm_dev *dev)
 static int nvm_core_init(struct nvm_dev *dev)
 {
 	dev->nr_luns = dev->identity.nchannels;
-	dev->sector_size = EXPOSED_PAGE_SIZE;
+	dev->sector_size = dev->ops->dev_sector_size;
 	INIT_LIST_HEAD(&dev->online_targets);
 
 	return 0;
@@ -541,6 +541,7 @@ int nvm_register(struct request_queue *q, char *disk_name,
 
 	dev->q = q;
 	dev->ops = ops;
+	dev->ops->dev_sector_size = DEV_EXPOSED_PAGE_SIZE;
 	strncpy(dev->name, disk_name, DISK_NAME_LEN);
 
 	ret = nvm_init(dev);
diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
new file mode 100644
index 0000000..5843383
--- /dev/null
+++ b/drivers/lightnvm/rrpc.c
@@ -0,0 +1,1296 @@
+/*
+ * Copyright (C) 2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mabj@itu.dk>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Implementation of a Round-robin page-based Hybrid FTL for Open-channel SSDs.
+ */
+
+#include "rrpc.h"
+
+static struct kmem_cache *rrpc_gcb_cache, *rrpc_rq_cache;
+static DECLARE_RWSEM(rrpc_lock);
+
+static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
+				struct nvm_rq *rqd, unsigned long flags);
+
+#define rrpc_for_each_lun(rrpc, rlun, i) \
+		for ((i) = 0, rlun = &(rrpc)->luns[0]; \
+			(i) < (rrpc)->nr_luns; (i)++, rlun = &(rrpc)->luns[(i)])
+
+static void rrpc_page_invalidate(struct rrpc *rrpc, struct rrpc_addr *a)
+{
+	struct rrpc_block *rblk = a->rblk;
+	unsigned int pg_offset;
+
+	lockdep_assert_held(&rrpc->rev_lock);
+
+	if (a->addr == ADDR_EMPTY || !rblk)
+		return;
+
+	spin_lock(&rblk->lock);
+
+	pg_offset = a->addr % rblk->parent->lun->nr_pages_per_blk;
+	WARN_ON(test_and_set_bit(pg_offset, rblk->invalid_pages));
+	rblk->nr_invalid_pages++;
+
+	spin_unlock(&rblk->lock);
+
+	rrpc->rev_trans_map[a->addr - rrpc->poffset].addr = ADDR_EMPTY;
+}
+
+static void rrpc_invalidate_range(struct rrpc *rrpc, sector_t slba,
+								unsigned len)
+{
+	sector_t i;
+
+	spin_lock(&rrpc->rev_lock);
+	for (i = slba; i < slba + len; i++) {
+		struct rrpc_addr *gp = &rrpc->trans_map[i];
+
+		rrpc_page_invalidate(rrpc, gp);
+		gp->rblk = NULL;
+	}
+	spin_unlock(&rrpc->rev_lock);
+}
+
+static struct nvm_rq *rrpc_inflight_laddr_acquire(struct rrpc *rrpc,
+					sector_t laddr, unsigned int pages)
+{
+	struct nvm_rq *rqd;
+	struct rrpc_inflight_rq *inf;
+
+	rqd = mempool_alloc(rrpc->rq_pool, GFP_ATOMIC);
+	if (!rqd)
+		return ERR_PTR(-ENOMEM);
+
+	inf = rrpc_get_inflight_rq(rqd);
+	if (rrpc_lock_laddr(rrpc, laddr, pages, inf)) {
+		mempool_free(rqd, rrpc->rq_pool);
+		return NULL;
+	}
+
+	return rqd;
+}
+
+static void rrpc_inflight_laddr_release(struct rrpc *rrpc, struct nvm_rq *rqd)
+{
+	struct rrpc_inflight_rq *inf = rrpc_get_inflight_rq(rqd);
+
+	rrpc_unlock_laddr(rrpc, inf);
+
+	mempool_free(rqd, rrpc->rq_pool);
+}
+
+static void rrpc_discard(struct rrpc *rrpc, struct bio *bio)
+{
+	sector_t slba = bio->bi_iter.bi_sector / NR_PHY_IN_LOG;
+	sector_t len = bio->bi_iter.bi_size / RRPC_EXPOSED_PAGE_SIZE;
+	struct nvm_rq *rqd;
+
+	do {
+		rqd = rrpc_inflight_laddr_acquire(rrpc, slba, len);
+		schedule();
+	} while (!rqd);
+
+	if (IS_ERR(rqd)) {
+		pr_err("rrpc: unable to acquire inflight IO\n");
+		bio_io_error(bio);
+		return;
+	}
+
+	rrpc_invalidate_range(rrpc, slba, len);
+	rrpc_inflight_laddr_release(rrpc, rqd);
+}
+
+static int block_is_full(struct rrpc_lun *rlun, struct rrpc_block *rblk)
+{
+	struct nvm_lun *lun = rlun->parent;
+
+	return (rblk->next_page == lun->nr_pages_per_blk);
+}
+
+static sector_t block_to_addr(struct rrpc_block *rblk)
+{
+	struct nvm_block *blk = rblk->parent;
+	struct nvm_lun *lun = rblk->parent->lun;
+
+	return blk->id * lun->nr_pages_per_blk;
+}
+
+/* requires lun->lock taken */
+static void rrpc_set_lun_cur(struct rrpc_lun *rlun, struct rrpc_block *rblk)
+{
+	BUG_ON(!rblk);
+
+	if (rlun->cur) {
+		spin_lock(&rlun->cur->lock);
+		WARN_ON(!block_is_full(rlun, rlun->cur));
+		spin_unlock(&rlun->cur->lock);
+	}
+	rlun->cur = rblk;
+}
+
+static struct rrpc_block *rrpc_get_blk(struct rrpc *rrpc, struct rrpc_lun *rlun,
+							unsigned long flags)
+{
+	struct nvm_block *blk;
+	struct rrpc_block *rblk;
+
+	blk = nvm_get_blk(rrpc->dev, rlun->parent, 0);
+	if (!blk)
+		return NULL;
+
+	rblk = &rlun->blocks[blk->id];
+	blk->priv = rblk;
+
+	bitmap_zero(rblk->invalid_pages, rlun->parent->nr_pages_per_blk);
+	rblk->next_page = 0;
+	rblk->nr_invalid_pages = 0;
+	atomic_set(&rblk->data_cmnt_size, 0);
+
+	return rblk;
+}
+
+static void rrpc_put_blk(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	nvm_put_blk(rrpc->dev, rblk->parent);
+}
+
+static struct rrpc_lun *get_next_lun(struct rrpc *rrpc)
+{
+	int next = atomic_inc_return(&rrpc->next_lun);
+
+	return &rrpc->luns[next % rrpc->nr_luns];
+}
+
+static void rrpc_gc_kick(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	unsigned int i;
+
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		rlun = &rrpc->luns[i];
+		queue_work(rrpc->krqd_wq, &rlun->ws_gc);
+	}
+}
+
+/*
+ * timed GC every interval.
+ */
+static void rrpc_gc_timer(unsigned long data)
+{
+	struct rrpc *rrpc = (struct rrpc *)data;
+
+	rrpc_gc_kick(rrpc);
+	mod_timer(&rrpc->gc_timer, jiffies + msecs_to_jiffies(10));
+}
+
+static void rrpc_end_sync_bio(struct bio *bio, int error)
+{
+	struct completion *waiting = bio->bi_private;
+
+	if (error)
+		pr_err("nvm: gc request failed (%u).\n", error);
+
+	complete(waiting);
+}
+
+/*
+ * rrpc_move_valid_pages -- migrate live data off the block
+ * @rrpc: the 'rrpc' structure
+ * @block: the block from which to migrate live pages
+ *
+ * Description:
+ *   GC algorithms may call this function to migrate remaining live
+ *   pages off the block prior to erasing it. This function blocks
+ *   further execution until the operation is complete.
+ */
+static int rrpc_move_valid_pages(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct request_queue *q = rrpc->dev->q;
+	struct rrpc_rev_addr *rev;
+	struct nvm_rq *rqd;
+	struct bio *bio;
+	struct page *page;
+	int slot;
+	int nr_pgs_per_blk = rblk->parent->lun->nr_pages_per_blk;
+	sector_t phys_addr;
+	DECLARE_COMPLETION_ONSTACK(wait);
+
+	if (bitmap_full(rblk->invalid_pages, nr_pgs_per_blk))
+		return 0;
+
+	bio = bio_alloc(GFP_NOIO, 1);
+	if (!bio) {
+		pr_err("nvm: could not alloc bio to gc\n");
+		return -ENOMEM;
+	}
+
+	page = mempool_alloc(rrpc->page_pool, GFP_NOIO);
+
+	while ((slot = find_first_zero_bit(rblk->invalid_pages,
+					    nr_pgs_per_blk)) < nr_pgs_per_blk) {
+
+		/* Lock laddr */
+		phys_addr = (rblk->parent->id * nr_pgs_per_blk) + slot;
+
+try:
+		spin_lock(&rrpc->rev_lock);
+		/* Get logical address from physical to logical table */
+		rev = &rrpc->rev_trans_map[phys_addr - rrpc->poffset];
+		/* already updated by previous regular write */
+		if (rev->addr == ADDR_EMPTY) {
+			spin_unlock(&rrpc->rev_lock);
+			continue;
+		}
+
+		rqd = rrpc_inflight_laddr_acquire(rrpc, rev->addr, 1);
+		if (IS_ERR_OR_NULL(rqd)) {
+			spin_unlock(&rrpc->rev_lock);
+			schedule();
+			goto try;
+		}
+
+		spin_unlock(&rrpc->rev_lock);
+
+		/* Perform read to do GC */
+		bio->bi_iter.bi_sector = rrpc_get_sector(rev->addr);
+		bio->bi_rw = READ;
+		bio->bi_private = &wait;
+		bio->bi_end_io = rrpc_end_sync_bio;
+
+		/* TODO: may fail when EXP_PG_SIZE > PAGE_SIZE */
+		bio_add_pc_page(q, bio, page, RRPC_EXPOSED_PAGE_SIZE, 0);
+
+		if (rrpc_submit_io(rrpc, bio, rqd, NVM_IOTYPE_GC)) {
+			pr_err("rrpc: gc read failed.\n");
+			rrpc_inflight_laddr_release(rrpc, rqd);
+			goto finished;
+		}
+		wait_for_completion_io(&wait);
+
+		bio_reset(bio);
+		reinit_completion(&wait);
+
+		bio->bi_iter.bi_sector = rrpc_get_sector(rev->addr);
+		bio->bi_rw = WRITE;
+		bio->bi_private = &wait;
+		bio->bi_end_io = rrpc_end_sync_bio;
+
+		bio_add_pc_page(q, bio, page, RRPC_EXPOSED_PAGE_SIZE, 0);
+
+		/* turn the command around and write the data back to a new
+		 * address */
+		if (rrpc_submit_io(rrpc, bio, rqd, NVM_IOTYPE_GC)) {
+			pr_err("rrpc: gc write failed.\n");
+			rrpc_inflight_laddr_release(rrpc, rqd);
+			goto finished;
+		}
+		wait_for_completion_io(&wait);
+
+		rrpc_inflight_laddr_release(rrpc, rqd);
+
+		bio_reset(bio);
+	}
+
+finished:
+	mempool_free(page, rrpc->page_pool);
+	bio_put(bio);
+
+	if (!bitmap_full(rblk->invalid_pages, nr_pgs_per_blk)) {
+		pr_err("nvm: failed to garbage collect block\n");
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static void rrpc_block_gc(struct work_struct *work)
+{
+	struct rrpc_block_gc *gcb = container_of(work, struct rrpc_block_gc,
+									ws_gc);
+	struct rrpc *rrpc = gcb->rrpc;
+	struct rrpc_block *rblk = gcb->rblk;
+	struct nvm_dev *dev = rrpc->dev;
+
+	pr_debug("nvm: block '%llu' being reclaimed\n", rblk->parent->id);
+
+	if (rrpc_move_valid_pages(rrpc, rblk))
+		goto done;
+
+	nvm_erase_blk(dev, rblk->parent);
+	rrpc_put_blk(rrpc, rblk);
+done:
+	mempool_free(gcb, rrpc->gcb_pool);
+}
+
+/* the block with the highest number of invalid pages will be at the
+ * beginning of the list */
+static struct rrpc_block *rblock_max_invalid(struct rrpc_block *ra,
+							struct rrpc_block *rb)
+{
+	if (ra->nr_invalid_pages == rb->nr_invalid_pages)
+		return ra;
+
+	return (ra->nr_invalid_pages < rb->nr_invalid_pages) ? rb : ra;
+}
+
+/* linearly find the block with the highest number of invalid pages;
+ * requires lun->lock */
+static struct rrpc_block *block_prio_find_max(struct rrpc_lun *rlun)
+{
+	struct list_head *prio_list = &rlun->prio_list;
+	struct rrpc_block *rblock, *max;
+
+	BUG_ON(list_empty(prio_list));
+
+	max = list_first_entry(prio_list, struct rrpc_block, prio);
+	list_for_each_entry(rblock, prio_list, prio)
+		max = rblock_max_invalid(max, rblock);
+
+	return max;
+}
+
+static void rrpc_lun_gc(struct work_struct *work)
+{
+	struct rrpc_lun *rlun = container_of(work, struct rrpc_lun, ws_gc);
+	struct rrpc *rrpc = rlun->rrpc;
+	struct nvm_lun *lun = rlun->parent;
+	struct rrpc_block_gc *gcb;
+	unsigned int nr_blocks_need;
+
+	nr_blocks_need = lun->nr_blocks / GC_LIMIT_INVERSE;
+
+	if (nr_blocks_need < rrpc->nr_luns)
+		nr_blocks_need = rrpc->nr_luns;
+
+	spin_lock(&lun->lock);
+	while (nr_blocks_need > lun->nr_free_blocks &&
+					!list_empty(&rlun->prio_list)) {
+		struct rrpc_block *rblock = block_prio_find_max(rlun);
+		struct nvm_block *block = rblock->parent;
+
+		if (!rblock->nr_invalid_pages)
+			break;
+
+		list_del_init(&rblock->prio);
+
+		BUG_ON(!block_is_full(rlun, rblock));
+
+		pr_debug("rrpc: selected block '%llu' for GC\n", block->id);
+
+		gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
+		if (!gcb)
+			break;
+
+		gcb->rrpc = rrpc;
+		gcb->rblk = rblock;
+		INIT_WORK(&gcb->ws_gc, rrpc_block_gc);
+
+		queue_work(rrpc->kgc_wq, &gcb->ws_gc);
+
+		nr_blocks_need--;
+	}
+	spin_unlock(&lun->lock);
+
+	/* TODO: Hint that request queue can be started again */
+}
+
+static void rrpc_gc_queue(struct work_struct *work)
+{
+	struct rrpc_block_gc *gcb = container_of(work, struct rrpc_block_gc,
+									ws_gc);
+	struct rrpc *rrpc = gcb->rrpc;
+	struct rrpc_block *rblk = gcb->rblk;
+	struct nvm_lun *lun = rblk->parent->lun;
+	struct rrpc_lun *rlun = &rrpc->luns[lun->id - rrpc->lun_offset];
+
+	spin_lock(&rlun->lock);
+	list_add_tail(&rblk->prio, &rlun->prio_list);
+	spin_unlock(&rlun->lock);
+
+	mempool_free(gcb, rrpc->gcb_pool);
+	pr_debug("nvm: block '%llu' is full, allow GC (sched)\n",
+							rblk->parent->id);
+}
+
+static const struct block_device_operations rrpc_fops = {
+	.owner		= THIS_MODULE,
+};
+
+static struct rrpc_lun *rrpc_get_lun_rr(struct rrpc *rrpc, int is_gc)
+{
+	unsigned int i;
+	struct rrpc_lun *rlun, *max_free;
+
+	if (!is_gc)
+		return get_next_lun(rrpc);
+
+	/* during GC, we don't care about the round-robin order; instead we
+	 * want to maintain evenness across the block luns. */
+	max_free = &rrpc->luns[0];
+	/* prevent a GC-ing lun from devouring pages of a lun with
+	 * few free blocks. We don't take the lock as we only need an
+	 * estimate. */
+	rrpc_for_each_lun(rrpc, rlun, i) {
+		if (rlun->parent->nr_free_blocks >
+					max_free->parent->nr_free_blocks)
+			max_free = rlun;
+	}
+
+	return max_free;
+}
+
+static struct rrpc_addr *rrpc_update_map(struct rrpc *rrpc, sector_t laddr,
+					struct rrpc_block *rblk, sector_t paddr)
+{
+	struct rrpc_addr *gp;
+	struct rrpc_rev_addr *rev;
+
+	BUG_ON(laddr >= rrpc->nr_pages);
+
+	gp = &rrpc->trans_map[laddr];
+	spin_lock(&rrpc->rev_lock);
+	if (gp->rblk)
+		rrpc_page_invalidate(rrpc, gp);
+
+	gp->addr = paddr;
+	gp->rblk = rblk;
+
+	rev = &rrpc->rev_trans_map[gp->addr - rrpc->poffset];
+	rev->addr = laddr;
+	spin_unlock(&rrpc->rev_lock);
+
+	return gp;
+}
+
+static sector_t rrpc_alloc_addr(struct rrpc_lun *rlun, struct rrpc_block *rblk)
+{
+	sector_t addr = ADDR_EMPTY;
+
+	spin_lock(&rblk->lock);
+	if (block_is_full(rlun, rblk))
+		goto out;
+
+	addr = block_to_addr(rblk) + rblk->next_page;
+
+	rblk->next_page++;
+out:
+	spin_unlock(&rblk->lock);
+	return addr;
+}
+
+/* Simple round-robin Logical to physical address translation.
+ *
+ * Retrieve the mapping using the active append point. Then update the ap for
+ * the next write to the disk.
+ *
+ * Returns rrpc_addr with the physical address and block. Remember to return to
+ * rrpc->addr_cache when request is finished.
+ */
+static struct rrpc_addr *rrpc_map_page(struct rrpc *rrpc, sector_t laddr,
+								int is_gc)
+{
+	struct rrpc_lun *rlun;
+	struct rrpc_block *rblk;
+	struct nvm_lun *lun;
+	sector_t paddr;
+
+	rlun = rrpc_get_lun_rr(rrpc, is_gc);
+	lun = rlun->parent;
+
+	if (!is_gc && lun->nr_free_blocks < rrpc->nr_luns * 4)
+		return NULL;
+
+	spin_lock(&rlun->lock);
+
+	rblk = rlun->cur;
+retry:
+	paddr = rrpc_alloc_addr(rlun, rblk);
+
+	if (paddr == ADDR_EMPTY) {
+		rblk = rrpc_get_blk(rrpc, rlun, 0);
+		if (rblk) {
+			rrpc_set_lun_cur(rlun, rblk);
+			goto retry;
+		}
+
+		if (is_gc) {
+			/* retry from emergency gc block */
+			paddr = rrpc_alloc_addr(rlun, rlun->gc_cur);
+			if (paddr == ADDR_EMPTY) {
+				rblk = rrpc_get_blk(rrpc, rlun, 1);
+				if (!rblk) {
+					pr_err("rrpc: no more blocks");
+					goto err;
+				}
+
+				rlun->gc_cur = rblk;
+				paddr = rrpc_alloc_addr(rlun, rlun->gc_cur);
+			}
+			rblk = rlun->gc_cur;
+		}
+	}
+
+	spin_unlock(&rlun->lock);
+	return rrpc_update_map(rrpc, laddr, rblk, paddr);
+err:
+	spin_unlock(&rlun->lock);
+	return NULL;
+}
+
+static void rrpc_run_gc(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct rrpc_block_gc *gcb;
+
+	gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
+	if (!gcb) {
+		pr_err("rrpc: unable to queue block for gc.");
+		return;
+	}
+
+	gcb->rrpc = rrpc;
+	gcb->rblk = rblk;
+
+	INIT_WORK(&gcb->ws_gc, rrpc_gc_queue);
+	queue_work(rrpc->kgc_wq, &gcb->ws_gc);
+}
+
+static void rrpc_end_io_write(struct rrpc *rrpc, struct rrpc_rq *rrqd,
+						sector_t laddr, uint8_t npages)
+{
+	struct rrpc_addr *p;
+	struct rrpc_block *rblk;
+	struct nvm_lun *lun;
+	int cmnt_size, i;
+
+	for (i = 0; i < npages; i++) {
+		p = &rrpc->trans_map[laddr + i];
+		rblk = p->rblk;
+		lun = rblk->parent->lun;
+
+		cmnt_size = atomic_inc_return(&rblk->data_cmnt_size);
+		if (unlikely(cmnt_size == lun->nr_pages_per_blk))
+			rrpc_run_gc(rrpc, rblk);
+	}
+}
+
+static void rrpc_end_io(struct nvm_rq *rqd, int error)
+{
+	struct rrpc *rrpc = container_of(rqd->ins, struct rrpc, instance);
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+	uint8_t npages = rqd->npages;
+	sector_t laddr = rrpc_get_laddr(rqd->bio) - npages;
+
+	if (bio_data_dir(rqd->bio) == WRITE)
+		rrpc_end_io_write(rrpc, rrqd, laddr, npages);
+
+	if (rrqd->flags & NVM_IOTYPE_GC)
+		return;
+
+	rrpc_unlock_rq(rrpc, rqd);
+	bio_put(rqd->bio);
+
+	if (npages > 1)
+		nvm_free_ppalist(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
+
+	mempool_free(rqd, rrpc->rq_pool);
+}
+
+static int rrpc_read_ppalist_rq(struct rrpc *rrpc, struct bio *bio,
+			struct nvm_rq *rqd, unsigned long flags, int npages)
+{
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+	struct rrpc_addr *gp;
+	sector_t laddr = rrpc_get_laddr(bio);
+	int is_gc = flags & NVM_IOTYPE_GC;
+	int i;
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd)) {
+		nvm_free_ppalist(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
+		return NVM_IO_REQUEUE;
+	}
+
+	for (i = 0; i < npages; i++) {
+		/* We assume that mapping occurs at 4KB granularity */
+		BUG_ON(laddr + i >= rrpc->nr_pages);
+		gp = &rrpc->trans_map[laddr + i];
+
+		if (gp->rblk) {
+			rqd->ppa_list[i] = gp->addr;
+		} else {
+			BUG_ON(is_gc);
+			rrpc_unlock_laddr(rrpc, r);
+			nvm_free_ppalist(rrpc->dev, rqd->ppa_list,
+							rqd->dma_ppa_list);
+			return NVM_IO_DONE;
+		}
+	}
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_read_rq(struct rrpc *rrpc, struct bio *bio, struct nvm_rq *rqd,
+							unsigned long flags)
+{
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+	int is_gc = flags & NVM_IOTYPE_GC;
+	sector_t laddr = rrpc_get_laddr(bio);
+	struct rrpc_addr *gp;
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd))
+		return NVM_IO_REQUEUE;
+
+	BUG_ON(laddr >= rrpc->nr_pages);
+	gp = &rrpc->trans_map[laddr];
+
+	if (gp->rblk) {
+		rqd->ppa = rrpc_get_sector(gp->addr);
+	} else {
+		BUG_ON(is_gc);
+		rrpc_unlock_rq(rrpc, rqd);
+		return NVM_IO_DONE;
+	}
+
+	rrqd->addr = gp;
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_write_ppalist_rq(struct rrpc *rrpc, struct bio *bio,
+			struct nvm_rq *rqd, unsigned long flags, int npages)
+{
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+	struct rrpc_addr *p;
+	sector_t laddr = rrpc_get_laddr(bio);
+	int is_gc = flags & NVM_IOTYPE_GC;
+	int i;
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd)) {
+		nvm_free_ppalist(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
+		return NVM_IO_REQUEUE;
+	}
+
+	for (i = 0; i < npages; i++) {
+		/* We assume that mapping occurs at 4KB granularity */
+		p = rrpc_map_page(rrpc, laddr + i, is_gc);
+		if (!p) {
+			BUG_ON(is_gc);
+			rrpc_unlock_laddr(rrpc, r);
+			nvm_free_ppalist(rrpc->dev, rqd->ppa_list,
+							rqd->dma_ppa_list);
+			rrpc_gc_kick(rrpc);
+			return NVM_IO_REQUEUE;
+		}
+
+		rqd->ppa_list[i] = p->addr;
+	}
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_write_rq(struct rrpc *rrpc, struct bio *bio,
+				struct nvm_rq *rqd, unsigned long flags)
+{
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+	struct rrpc_addr *p;
+	int is_gc = flags & NVM_IOTYPE_GC;
+	sector_t laddr = rrpc_get_laddr(bio);
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd))
+		return NVM_IO_REQUEUE;
+
+	p = rrpc_map_page(rrpc, laddr, is_gc);
+	if (!p) {
+		BUG_ON(is_gc);
+		rrpc_unlock_rq(rrpc, rqd);
+		rrpc_gc_kick(rrpc);
+		return NVM_IO_REQUEUE;
+	}
+
+	rqd->ppa = rrpc_get_sector(p->addr);
+	rrqd->addr = p;
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_setup_rq(struct rrpc *rrpc, struct bio *bio,
+			struct nvm_rq *rqd, unsigned long flags, uint8_t npages)
+{
+	if (npages > 1) {
+		rqd->ppa_list = nvm_alloc_ppalist(rrpc->dev, GFP_KERNEL,
+							&rqd->dma_ppa_list);
+		if (!rqd->ppa_list) {
+			pr_err("rrpc: not able to allocate ppa list\n");
+			return NVM_IO_ERR;
+		}
+
+		if (bio_rw(bio) == WRITE)
+			return rrpc_write_ppalist_rq(rrpc, bio, rqd, flags,
+									npages);
+
+		return rrpc_read_ppalist_rq(rrpc, bio, rqd, flags, npages);
+	}
+
+	if (bio_rw(bio) == WRITE)
+		return rrpc_write_rq(rrpc, bio, rqd, flags);
+
+	return rrpc_read_rq(rrpc, bio, rqd, flags);
+}
+
+static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
+				struct nvm_rq *rqd, unsigned long flags)
+{
+	int err;
+	struct rrpc_rq *rrq = nvm_rq_to_pdu(rqd);
+	uint8_t npages = rrpc_get_pages(bio);
+
+	err = rrpc_setup_rq(rrpc, bio, rqd, flags, npages);
+	if (err)
+		return err;
+
+	bio_get(bio);
+	rqd->bio = bio;
+	rqd->ins = &rrpc->instance;
+	rqd->npages = npages;
+	rrq->flags = flags;
+
+	err = nvm_submit_io(rrpc->dev, rqd);
+	if (err) {
+		pr_err("rrpc: IO submission failed: %d\n", err);
+		return NVM_IO_ERR;
+	}
+
+	return NVM_IO_OK;
+}
+
+static void rrpc_make_rq(struct request_queue *q, struct bio *bio)
+{
+	struct rrpc *rrpc = q->queuedata;
+	struct nvm_rq *rqd;
+	int err;
+
+	if (bio->bi_rw & REQ_DISCARD) {
+		rrpc_discard(rrpc, bio);
+		return;
+	}
+
+	rqd = mempool_alloc(rrpc->rq_pool, GFP_KERNEL);
+	if (!rqd) {
+		pr_err_ratelimited("rrpc: not able to queue bio.\n");
+		bio_io_error(bio);
+		return;
+	}
+	/* pooled objects are recycled; clear so the NVM_IO_ERR path below
+	 * never sees a stale ppa_list from a previous request */
+	memset(rqd, 0, sizeof(struct nvm_rq));
+
+	err = rrpc_submit_io(rrpc, bio, rqd, NVM_IOTYPE_NONE);
+	switch (err) {
+	case NVM_IO_OK:
+		return;
+	case NVM_IO_ERR:
+		if (rqd->ppa_list)
+			nvm_free_ppalist(rrpc->dev, rqd->ppa_list,
+							rqd->dma_ppa_list);
+		bio_io_error(bio);
+		break;
+	case NVM_IO_DONE:
+		bio_endio(bio, 0);
+		break;
+	case NVM_IO_REQUEUE:
+		spin_lock(&rrpc->bio_lock);
+		bio_list_add(&rrpc->requeue_bios, bio);
+		spin_unlock(&rrpc->bio_lock);
+		queue_work(rrpc->kgc_wq, &rrpc->ws_requeue);
+		break;
+	}
+
+	mempool_free(rqd, rrpc->rq_pool);
+}
+
+static void rrpc_requeue(struct work_struct *work)
+{
+	struct rrpc *rrpc = container_of(work, struct rrpc, ws_requeue);
+	struct bio_list bios;
+	struct bio *bio;
+
+	bio_list_init(&bios);
+
+	spin_lock(&rrpc->bio_lock);
+	bio_list_merge(&bios, &rrpc->requeue_bios);
+	bio_list_init(&rrpc->requeue_bios);
+	spin_unlock(&rrpc->bio_lock);
+
+	while ((bio = bio_list_pop(&bios)))
+		rrpc_make_rq(rrpc->disk->queue, bio);
+}
+
+static void rrpc_gc_free(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	int i;
+
+	if (rrpc->krqd_wq)
+		destroy_workqueue(rrpc->krqd_wq);
+
+	if (rrpc->kgc_wq)
+		destroy_workqueue(rrpc->kgc_wq);
+
+	if (!rrpc->luns)
+		return;
+
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		rlun = &rrpc->luns[i];
+
+		if (!rlun->blocks)
+			break;
+		vfree(rlun->blocks);
+	}
+}
+
+static int rrpc_gc_init(struct rrpc *rrpc)
+{
+	rrpc->krqd_wq = alloc_workqueue("rrpc-lun", WQ_MEM_RECLAIM|WQ_UNBOUND,
+						rrpc->nr_luns);
+	if (!rrpc->krqd_wq)
+		return -ENOMEM;
+
+	rrpc->kgc_wq = alloc_workqueue("rrpc-bg", WQ_MEM_RECLAIM, 1);
+	if (!rrpc->kgc_wq)
+		return -ENOMEM;
+
+	setup_timer(&rrpc->gc_timer, rrpc_gc_timer, (unsigned long)rrpc);
+
+	return 0;
+}
+
+static void rrpc_map_free(struct rrpc *rrpc)
+{
+	vfree(rrpc->rev_trans_map);
+	vfree(rrpc->trans_map);
+}
+
+static int rrpc_l2p_update(u64 slba, u64 nlb, u64 *entries, void *private)
+{
+	struct rrpc *rrpc = (struct rrpc *)private;
+	struct nvm_dev *dev = rrpc->dev;
+	struct rrpc_addr *addr = rrpc->trans_map + slba;
+	struct rrpc_rev_addr *raddr = rrpc->rev_trans_map;
+	sector_t max_pages = dev->total_pages * (dev->sector_size >> 9);
+	u64 elba = slba + nlb;
+	u64 i;
+
+	if (unlikely(elba > dev->total_pages)) {
+		pr_err("nvm: L2P data from device is out of bounds!\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < nlb; i++) {
+		u64 pba = le64_to_cpu(entries[i]);
+		/* LNVM treats the address spaces as silos: the LBA and PBA
+		 * spaces are equally large and zero-indexed. */
+		if (unlikely(pba >= max_pages && pba != U64_MAX)) {
+			pr_err("nvm: L2P data entry is out of bounds!\n");
+			return -EINVAL;
+		}
+
+		/* Address zero is special: the first page of a disk is
+		 * protected, as it often holds internal device boot
+		 * information. */
+		if (!pba)
+			continue;
+
+		addr[i].addr = pba;
+		raddr[pba].addr = slba + i;
+	}
+
+	return 0;
+}
+
+static int rrpc_map_init(struct rrpc *rrpc)
+{
+	struct nvm_dev *dev = rrpc->dev;
+	sector_t i;
+	int ret;
+
+	rrpc->trans_map = vzalloc(sizeof(struct rrpc_addr) * rrpc->nr_pages);
+	if (!rrpc->trans_map)
+		return -ENOMEM;
+
+	rrpc->rev_trans_map = vmalloc(sizeof(struct rrpc_rev_addr)
+							* rrpc->nr_pages);
+	if (!rrpc->rev_trans_map)
+		return -ENOMEM;
+
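+	/* start fully unmapped; valid entries are filled in below from the
+	 * device L2P table, when one is available */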
+	for (i = 0; i < rrpc->nr_pages; i++) {
+		struct rrpc_addr *p = &rrpc->trans_map[i];
+		struct rrpc_rev_addr *r = &rrpc->rev_trans_map[i];
+
+		p->addr = ADDR_EMPTY;
+		r->addr = ADDR_EMPTY;
+	}
+
+	if (!dev->ops->get_l2p_tbl)
+		return 0;
+
+	/* Bring up the mapping table from device */
+	ret = dev->ops->get_l2p_tbl(dev->q, 0, dev->total_pages,
+							rrpc_l2p_update, rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not read L2P table.\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/* Minimum pages needed within a lun */
+#define PAGE_POOL_SIZE 16
+#define ADDR_POOL_SIZE 64
+
+static int rrpc_core_init(struct rrpc *rrpc)
+{
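+	/* the slab caches are shared across rrpc instances; create them only
+	 * once, under the global rrpc_lock */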
+	down_write(&rrpc_lock);
+	if (!rrpc_gcb_cache) {
+		rrpc_gcb_cache = kmem_cache_create("rrpc_gcb",
+				sizeof(struct rrpc_block_gc), 0, 0, NULL);
+		if (!rrpc_gcb_cache) {
+			up_write(&rrpc_lock);
+			return -ENOMEM;
+		}
+
+		rrpc_rq_cache = kmem_cache_create("rrpc_rq",
+				sizeof(struct nvm_rq) + sizeof(struct rrpc_rq),
+				0, 0, NULL);
+		if (!rrpc_rq_cache) {
+			kmem_cache_destroy(rrpc_gcb_cache);
+			up_write(&rrpc_lock);
+			return -ENOMEM;
+		}
+	}
+	up_write(&rrpc_lock);
+
+	rrpc->page_pool = mempool_create_page_pool(PAGE_POOL_SIZE, 0);
+	if (!rrpc->page_pool)
+		return -ENOMEM;
+
+	rrpc->gcb_pool = mempool_create_slab_pool(rrpc->dev->nr_luns,
+								rrpc_gcb_cache);
+	if (!rrpc->gcb_pool)
+		return -ENOMEM;
+
+	rrpc->rq_pool = mempool_create_slab_pool(64, rrpc_rq_cache);
+	if (!rrpc->rq_pool)
+		return -ENOMEM;
+
+	spin_lock_init(&rrpc->inflights.lock);
+	INIT_LIST_HEAD(&rrpc->inflights.reqs);
+
+	return 0;
+}
+
+static void rrpc_core_free(struct rrpc *rrpc)
+{
+	if (rrpc->page_pool)
+		mempool_destroy(rrpc->page_pool);
+	if (rrpc->gcb_pool)
+		mempool_destroy(rrpc->gcb_pool);
+	if (rrpc->rq_pool)
+		mempool_destroy(rrpc->rq_pool);
+}
+
+static void rrpc_luns_free(struct rrpc *rrpc)
+{
+	kfree(rrpc->luns);
+}
+
+static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end)
+{
+	struct nvm_dev *dev = rrpc->dev;
+	struct nvm_lun *luns;
+	struct rrpc_lun *rlun;
+	int i, j;
+
+	spin_lock_init(&rrpc->rev_lock);
+
+	luns = dev->bm->get_luns(dev, lun_begin, lun_end);
+	if (!luns)
+		return -EINVAL;
+
+	rrpc->luns = kcalloc(rrpc->nr_luns, sizeof(struct rrpc_lun),
+								GFP_KERNEL);
+	if (!rrpc->luns)
+		return -ENOMEM;
+
+	/* 1:1 mapping */
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		struct nvm_lun *lun = &luns[i];
+
+		if (lun->nr_pages_per_blk >
+				MAX_INVALID_PAGES_STORAGE * BITS_PER_LONG) {
+			pr_err("rrpc: number of pages per block too high.");
+			goto err;
+		}
+
+		rlun = &rrpc->luns[i];
+		rlun->rrpc = rrpc;
+		rlun->parent = lun;
+		INIT_LIST_HEAD(&rlun->prio_list);
+		INIT_WORK(&rlun->ws_gc, rrpc_lun_gc);
+		spin_lock_init(&rlun->lock);
+
+		rrpc->total_blocks += lun->nr_blocks;
+		rrpc->nr_pages += lun->nr_blocks * lun->nr_pages_per_blk;
+
+		rlun->blocks = vzalloc(sizeof(struct rrpc_block) *
+						 lun->nr_blocks);
+		if (!rlun->blocks)
+			goto err;
+
+		for (j = 0; j < lun->nr_blocks; j++) {
+			struct rrpc_block *rblk = &rlun->blocks[j];
+			struct nvm_block *blk = &lun->blocks[j];
+
+			rblk->parent = blk;
+			INIT_LIST_HEAD(&rblk->prio);
+			spin_lock_init(&rblk->lock);
+		}
+	}
+
+	return 0;
+err:
+	return -ENOMEM;
+}
+
+static void rrpc_free(struct rrpc *rrpc)
+{
+	rrpc_gc_free(rrpc);
+	rrpc_map_free(rrpc);
+	rrpc_core_free(rrpc);
+	rrpc_luns_free(rrpc);
+
+	kfree(rrpc);
+}
+
+static void rrpc_exit(void *private)
+{
+	struct rrpc *rrpc = private;
+
+	del_timer(&rrpc->gc_timer);
+
+	flush_workqueue(rrpc->krqd_wq);
+	flush_workqueue(rrpc->kgc_wq);
+
+	rrpc_free(rrpc);
+}
+
+static sector_t rrpc_capacity(void *private)
+{
+	struct rrpc *rrpc = private;
+	struct nvm_dev *dev = rrpc->dev;
+	sector_t reserved;
+
+	/* cur, gc, and two emergency blocks for each lun */
+	reserved = rrpc->nr_luns * dev->max_pages_per_blk * 4;
+
+	if (reserved > rrpc->nr_pages) {
+		pr_err("rrpc: not enough space available to expose storage.\n");
+		return 0;
+	}
+
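+	/* expose 90% of what remains and keep the rest as GC headroom;
+	 * NR_PHY_IN_LOG converts 4K pages to 512-byte sectors */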
+	return ((rrpc->nr_pages - reserved) / 10) * 9 * NR_PHY_IN_LOG;
+}
+
+/*
+ * Looks up each physical page of the block in the reverse translation map and
+ * checks whether the logical-to-physical entry still points back at it. Pages
+ * whose mapping has moved elsewhere are marked invalid.
+ */
+static void rrpc_block_map_update(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct nvm_lun *lun = rblk->parent->lun;
+	int offset;
+	struct rrpc_addr *laddr;
+	sector_t paddr, pladdr;
+
+	for (offset = 0; offset < lun->nr_pages_per_blk; offset++) {
+		paddr = block_to_addr(rblk) + offset;
+
+		pladdr = rrpc->rev_trans_map[paddr].addr;
+		if (pladdr == ADDR_EMPTY)
+			continue;
+
+		laddr = &rrpc->trans_map[pladdr];
+
+		if (paddr == laddr->addr) {
+			laddr->rblk = rblk;
+		} else {
+			set_bit(offset, rblk->invalid_pages);
+			rblk->nr_invalid_pages++;
+		}
+	}
+}
+
+static int rrpc_blocks_init(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	struct rrpc_block *rblk;
+	int lun_iter, blk_iter;
+
+	for (lun_iter = 0; lun_iter < rrpc->nr_luns; lun_iter++) {
+		rlun = &rrpc->luns[lun_iter];
+
+		for (blk_iter = 0; blk_iter < rlun->parent->nr_blocks;
+								blk_iter++) {
+			rblk = &rlun->blocks[blk_iter];
+			rrpc_block_map_update(rrpc, rblk);
+		}
+	}
+
+	return 0;
+}
+
+static int rrpc_luns_configure(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	struct rrpc_block *rblk;
+	int i;
+
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		rlun = &rrpc->luns[i];
+
+		rblk = rrpc_get_blk(rrpc, rlun, 0);
+		if (!rblk)
+			return -EINVAL;
+
+		rrpc_set_lun_cur(rlun, rblk);
+
+		/* Emergency gc block */
+		rblk = rrpc_get_blk(rrpc, rlun, 1);
+		if (!rblk)
+			return -EINVAL;
+		rlun->gc_cur = rblk;
+	}
+
+	return 0;
+}
+
+static struct nvm_tgt_type tt_rrpc;
+
+static void *rrpc_init(struct nvm_dev *dev, struct gendisk *tdisk,
+						int lun_begin, int lun_end)
+{
+	struct request_queue *bqueue = dev->q;
+	struct request_queue *tqueue = tdisk->queue;
+	struct rrpc *rrpc;
+	int ret;
+
+	rrpc = kzalloc(sizeof(struct rrpc), GFP_KERNEL);
+	if (!rrpc) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	rrpc->instance.tt = &tt_rrpc;
+	rrpc->dev = dev;
+	rrpc->disk = tdisk;
+
+	bio_list_init(&rrpc->requeue_bios);
+	spin_lock_init(&rrpc->bio_lock);
+	INIT_WORK(&rrpc->ws_requeue, rrpc_requeue);
+
+	rrpc->nr_luns = lun_end - lun_begin + 1;
+
+	/* simple round-robin strategy */
+	atomic_set(&rrpc->next_lun, -1);
+
+	ret = rrpc_luns_init(rrpc, lun_begin, lun_end);
+	if (ret) {
+		pr_err("nvm: could not initialize luns\n");
+		goto err;
+	}
+
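+	/* pages preceding lun_begin; used to rebase physical addresses into
+	 * the reverse map, which covers only this target's luns */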
+	rrpc->poffset = rrpc->luns[0].parent->nr_blocks *
+			rrpc->luns[0].parent->nr_pages_per_blk * lun_begin;
+	rrpc->lun_offset = lun_begin;
+
+	ret = rrpc_core_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize core\n");
+		goto err;
+	}
+
+	ret = rrpc_map_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize maps\n");
+		goto err;
+	}
+
+	ret = rrpc_blocks_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize state for blocks\n");
+		goto err;
+	}
+
+	ret = rrpc_luns_configure(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: not enough blocks available in LUNs.\n");
+		goto err;
+	}
+
+	ret = rrpc_gc_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize gc\n");
+		goto err;
+	}
+
+	/* inherit the size from the underlying device */
+	blk_queue_logical_block_size(tqueue, queue_physical_block_size(bqueue));
+	blk_queue_max_hw_sectors(tqueue, queue_max_hw_sectors(bqueue));
+
+	pr_info("nvm: rrpc initialized with %u luns and %llu pages.\n",
+			rrpc->nr_luns, (unsigned long long)rrpc->nr_pages);
+
+	mod_timer(&rrpc->gc_timer, jiffies + msecs_to_jiffies(10));
+
+	return rrpc;
+err:
+	rrpc_free(rrpc);
+	return ERR_PTR(ret);
+}
+
+/* round robin, page-based FTL, and cost-based GC */
+static struct nvm_tgt_type tt_rrpc = {
+	.name		= "rrpc",
+
+	.make_rq	= rrpc_make_rq,
+	.capacity	= rrpc_capacity,
+	.end_io		= rrpc_end_io,
+
+	.init		= rrpc_init,
+	.exit		= rrpc_exit,
+};
+
+static int __init rrpc_module_init(void)
+{
+	return nvm_register_target(&tt_rrpc);
+}
+
+static void rrpc_module_exit(void)
+{
+	nvm_unregister_target(&tt_rrpc);
+}
+
+module_init(rrpc_module_init);
+module_exit(rrpc_module_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Hybrid Target for Open-Channel SSDs");
diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h
new file mode 100644
index 0000000..706ba0f
--- /dev/null
+++ b/drivers/lightnvm/rrpc.h
@@ -0,0 +1,236 @@
+/*
+ * Copyright (C) 2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mabj@itu.dk>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Implementation of a Round-robin page-based Hybrid FTL for Open-channel SSDs.
+ */
+
+#ifndef RRPC_H_
+#define RRPC_H_
+
+#include <linux/blkdev.h>
+#include <linux/blk-mq.h>
+#include <linux/bio.h>
+#include <linux/module.h>
+#include <linux/kthread.h>
+#include <linux/vmalloc.h>
+
+#include <linux/lightnvm.h>
+
+/* Only run GC if less than 1/X of the blocks are free */
+#define GC_LIMIT_INVERSE 10
+#define GC_TIME_SECS 100
+
+#define RRPC_SECTOR (512)
+#define RRPC_EXPOSED_PAGE_SIZE (4096)
+
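+/* number of 512-byte physical sectors per exposed 4KB logical page */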
+#define NR_PHY_IN_LOG (RRPC_EXPOSED_PAGE_SIZE / RRPC_SECTOR)
+
+struct rrpc_inflight {
+	struct list_head reqs;
+	spinlock_t lock;
+};
+
+struct rrpc_inflight_rq {
+	struct list_head list;
+	sector_t l_start;
+	sector_t l_end;
+};
+
+struct rrpc_rq {
+	struct rrpc_inflight_rq inflight_rq;
+	struct rrpc_addr *addr;
+	unsigned long flags;
+};
+
+struct rrpc_block {
+	struct nvm_block *parent;
+	struct list_head prio;
+
+#define MAX_INVALID_PAGES_STORAGE 8
+	/* Bitmap for invalid page entries */
+	unsigned long invalid_pages[MAX_INVALID_PAGES_STORAGE];
+	/* points to the next writable page within a block */
+	unsigned int next_page;
+	/* number of pages that are invalid, wrt host page size */
+	unsigned int nr_invalid_pages;
+
+	spinlock_t lock;
+	atomic_t data_cmnt_size; /* data pages committed to stable storage */
+};
+
+struct rrpc_lun {
+	struct rrpc *rrpc;
+	struct nvm_lun *parent;
+	struct rrpc_block *cur, *gc_cur;
+	struct rrpc_block *blocks;	/* Reference to block allocation */
+	struct list_head prio_list;		/* Blocks that may be GC'ed */
+	struct work_struct ws_gc;
+
+	spinlock_t lock;
+};
+
+struct rrpc {
+	/* instance must be kept at the top so rrpc can be resolved in end_io */
+	struct nvm_tgt_instance instance;
+
+	struct nvm_dev *dev;
+	struct gendisk *disk;
+
+	sector_t poffset; /* physical page offset */
+	int lun_offset;
+
+	int nr_luns;
+	struct rrpc_lun *luns;
+
+	/* calculated values */
+	unsigned long nr_pages;
+	unsigned long total_blocks;
+
+	/* Write strategy variables. Move these into a separate structure for
+	 * each strategy. */
+	atomic_t next_lun; /* Whenever a page is written, this is updated
+			    * to point to the next write lun */
+
+	spinlock_t bio_lock;
+	struct bio_list requeue_bios;
+	struct work_struct ws_requeue;
+
+	/* Simple translation map of logical addresses to physical addresses.
+	 * The logical addresses are known by the host system, while the physical
+	 * addresses are used when writing to the disk block device. */
+	struct rrpc_addr *trans_map;
+	/* also store a reverse map for garbage collection */
+	struct rrpc_rev_addr *rev_trans_map;
+	spinlock_t rev_lock;
+
+	struct rrpc_inflight inflights;
+
+	mempool_t *addr_pool;
+	mempool_t *page_pool;
+	mempool_t *gcb_pool;
+	mempool_t *rq_pool;
+
+	struct timer_list gc_timer;
+	struct workqueue_struct *krqd_wq;
+	struct workqueue_struct *kgc_wq;
+};
+
+struct rrpc_block_gc {
+	struct rrpc *rrpc;
+	struct rrpc_block *rblk;
+	struct work_struct ws_gc;
+};
+
+/* Logical to physical mapping */
+struct rrpc_addr {
+	sector_t addr;
+	struct rrpc_block *rblk;
+};
+
+/* Physical to logical mapping */
+struct rrpc_rev_addr {
+	sector_t addr;
+};
+
+static inline sector_t rrpc_get_laddr(struct bio *bio)
+{
+	return bio->bi_iter.bi_sector / NR_PHY_IN_LOG;
+}
+
+static inline unsigned int rrpc_get_pages(struct bio *bio)
+{
+	return bio->bi_iter.bi_size / RRPC_EXPOSED_PAGE_SIZE;
+}
+
+static inline sector_t rrpc_get_sector(sector_t laddr)
+{
+	return laddr * NR_PHY_IN_LOG;
+}
+
+static inline int request_intersects(struct rrpc_inflight_rq *r,
+				sector_t laddr_start, sector_t laddr_end)
+{
+	/* ranges overlap unless one ends strictly before the other begins */
+	return laddr_end >= r->l_start && laddr_start <= r->l_end;
+}
+
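+/* Track a logical address range as inflight. Fails if the range overlaps one
+ * that is already inflight; returns 0 on success, 1 if the caller must retry.
+ */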
+static int __rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
+			     unsigned pages, struct rrpc_inflight_rq *r)
+{
+	sector_t laddr_end = laddr + pages - 1;
+	struct rrpc_inflight_rq *rtmp;
+
+	spin_lock_irq(&rrpc->inflights.lock);
+	list_for_each_entry(rtmp, &rrpc->inflights.reqs, list) {
+		if (unlikely(request_intersects(rtmp, laddr, laddr_end))) {
+			/* existing, overlapping request, come back later */
+			spin_unlock_irq(&rrpc->inflights.lock);
+			return 1;
+		}
+	}
+
+	r->l_start = laddr;
+	r->l_end = laddr_end;
+
+	list_add_tail(&r->list, &rrpc->inflights.reqs);
+	spin_unlock_irq(&rrpc->inflights.lock);
+	return 0;
+}
+
+static inline int rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
+				 unsigned pages,
+				 struct rrpc_inflight_rq *r)
+{
+	BUG_ON((laddr + pages) > rrpc->nr_pages);
+
+	return __rrpc_lock_laddr(rrpc, laddr, pages, r);
+}
+
+static inline struct rrpc_inflight_rq *rrpc_get_inflight_rq(struct nvm_rq *rqd)
+{
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+
+	return &rrqd->inflight_rq;
+}
+
+static inline int rrpc_lock_rq(struct rrpc *rrpc, struct bio *bio,
+							struct nvm_rq *rqd)
+{
+	sector_t laddr = rrpc_get_laddr(bio);
+	unsigned int pages = rrpc_get_pages(bio);
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+
+	return rrpc_lock_laddr(rrpc, laddr, pages, r);
+}
+
+static inline void rrpc_unlock_laddr(struct rrpc *rrpc,
+						struct rrpc_inflight_rq *r)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&rrpc->inflights.lock, flags);
+	list_del_init(&r->list);
+	spin_unlock_irqrestore(&rrpc->inflights.lock, flags);
+}
+
+static inline void rrpc_unlock_rq(struct rrpc *rrpc, struct nvm_rq *rqd)
+{
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+	uint8_t pages = rqd->npages;
+
+	BUG_ON((r->l_start + pages) > rrpc->nr_pages);
+
+	rrpc_unlock_laddr(rrpc, r);
+}
+
+#endif /* RRPC_H_ */
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 9654354..0ac73d5 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -135,6 +135,7 @@ struct nvm_dev_ops {
 	nvm_alloc_ppalist_fn	*alloc_ppalist;
 	nvm_free_ppalist_fn	*free_ppalist;
 
+	int			dev_sector_size;
 	uint8_t			max_phys_sect;
 };
 
@@ -286,10 +287,8 @@ extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
  * bytes chunks. This should be set to the smallest command size available for a
  * given device.
  */
-#define NVM_SECTOR (512)
-#define EXPOSED_PAGE_SIZE (4096)
 
-#define NR_PHY_IN_LOG (EXPOSED_PAGE_SIZE / NVM_SECTOR)
+#define DEV_EXPOSED_PAGE_SIZE (4096)
 
 #define NVM_MSG_PREFIX "nvm"
 #define ADDR_EMPTY (~0ULL)
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v7 2/5] lightnvm: Hybrid Open-Channel SSD RRPC target
@ 2015-08-07 14:29   ` Matias Bjørling
  0 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-08-07 14:29 UTC (permalink / raw)
  To: hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

This target implements a simple strategy FTL for Open-Channel SSDs.
It does round-robin selection across channels and luns. It uses a
simple greedy cost-based garbage collector and exposes the physical
flash as a block device.

Signed-off-by: Javier González <jg@lightnvm.io>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
 drivers/lightnvm/Kconfig  |   10 +
 drivers/lightnvm/Makefile |    1 +
 drivers/lightnvm/core.c   |    3 +-
 drivers/lightnvm/rrpc.c   | 1296 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/lightnvm/rrpc.h   |  236 +++++++++
 include/linux/lightnvm.h  |    5 +-
 6 files changed, 1547 insertions(+), 4 deletions(-)
 create mode 100644 drivers/lightnvm/rrpc.c
 create mode 100644 drivers/lightnvm/rrpc.h

diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
index 1f8412c..ab1fe57 100644
--- a/drivers/lightnvm/Kconfig
+++ b/drivers/lightnvm/Kconfig
@@ -14,3 +14,13 @@ menuconfig NVM
 	  If you say N, all options in this submenu will be skipped and disabled
 	  only do this if you know what you are doing.
 
+if NVM
+
+config NVM_RRPC
+	tristate "Round-robin Hybrid Open-Channel SSD target"
+	---help---
+	Allows an open-channel SSD to be exposed as a block device to the
+	host. The target is implemented using a linear mapping table and
+	cost-based garbage collection. It is optimized for 4K IO sizes.
+
+endif # NVM
diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
index 38185e9..b2a39e2 100644
--- a/drivers/lightnvm/Makefile
+++ b/drivers/lightnvm/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-$(CONFIG_NVM)		:= core.o
+obj-$(CONFIG_NVM_RRPC)		+= rrpc.o
diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 6499922..5e4c2b8 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -169,7 +169,7 @@ static void nvm_core_free(struct nvm_dev *dev)
 static int nvm_core_init(struct nvm_dev *dev)
 {
 	dev->nr_luns = dev->identity.nchannels;
-	dev->sector_size = EXPOSED_PAGE_SIZE;
+	dev->sector_size = dev->ops->dev_sector_size;
 	INIT_LIST_HEAD(&dev->online_targets);
 
 	return 0;
@@ -541,6 +541,7 @@ int nvm_register(struct request_queue *q, char *disk_name,
 
 	dev->q = q;
 	dev->ops = ops;
+	dev->ops->dev_sector_size = DEV_EXPOSED_PAGE_SIZE;
 	strncpy(dev->name, disk_name, DISK_NAME_LEN);
 
 	ret = nvm_init(dev);
diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
new file mode 100644
index 0000000..5843383
--- /dev/null
+++ b/drivers/lightnvm/rrpc.c
@@ -0,0 +1,1296 @@
+/*
+ * Copyright (C) 2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mabj@itu.dk>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Implementation of a Round-robin page-based Hybrid FTL for Open-channel SSDs.
+ */
+
+#include "rrpc.h"
+
+static struct kmem_cache *rrpc_gcb_cache, *rrpc_rq_cache;
+static DECLARE_RWSEM(rrpc_lock);
+
+static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
+				struct nvm_rq *rqd, unsigned long flags);
+
+#define rrpc_for_each_lun(rrpc, rlun, i) \
+		for ((i) = 0, rlun = &(rrpc)->luns[0]; \
+			(i) < (rrpc)->nr_luns; (i)++, rlun = &(rrpc)->luns[(i)])
+
+static void rrpc_page_invalidate(struct rrpc *rrpc, struct rrpc_addr *a)
+{
+	struct rrpc_block *rblk = a->rblk;
+	unsigned int pg_offset;
+
+	lockdep_assert_held(&rrpc->rev_lock);
+
+	if (a->addr == ADDR_EMPTY || !rblk)
+		return;
+
+	spin_lock(&rblk->lock);
+
+	pg_offset = a->addr % rblk->parent->lun->nr_pages_per_blk;
+	WARN_ON(test_and_set_bit(pg_offset, rblk->invalid_pages));
+	rblk->nr_invalid_pages++;
+
+	spin_unlock(&rblk->lock);
+
+	rrpc->rev_trans_map[a->addr - rrpc->poffset].addr = ADDR_EMPTY;
+}
+
+static void rrpc_invalidate_range(struct rrpc *rrpc, sector_t slba,
+								unsigned len)
+{
+	sector_t i;
+
+	spin_lock(&rrpc->rev_lock);
+	for (i = slba; i < slba + len; i++) {
+		struct rrpc_addr *gp = &rrpc->trans_map[i];
+
+		rrpc_page_invalidate(rrpc, gp);
+		gp->rblk = NULL;
+	}
+	spin_unlock(&rrpc->rev_lock);
+}
+
+static struct nvm_rq *rrpc_inflight_laddr_acquire(struct rrpc *rrpc,
+					sector_t laddr, unsigned int pages)
+{
+	struct nvm_rq *rqd;
+	struct rrpc_inflight_rq *inf;
+
+	rqd = mempool_alloc(rrpc->rq_pool, GFP_ATOMIC);
+	if (!rqd)
+		return ERR_PTR(-ENOMEM);
+
+	inf = rrpc_get_inflight_rq(rqd);
+	if (rrpc_lock_laddr(rrpc, laddr, pages, inf)) {
+		mempool_free(rqd, rrpc->rq_pool);
+		return NULL;
+	}
+
+	return rqd;
+}
+
+static void rrpc_inflight_laddr_release(struct rrpc *rrpc, struct nvm_rq *rqd)
+{
+	struct rrpc_inflight_rq *inf = rrpc_get_inflight_rq(rqd);
+
+	rrpc_unlock_laddr(rrpc, inf);
+
+	mempool_free(rqd, rrpc->rq_pool);
+}
+
+static void rrpc_discard(struct rrpc *rrpc, struct bio *bio)
+{
+	sector_t slba = bio->bi_iter.bi_sector / NR_PHY_IN_LOG;
+	sector_t len = bio->bi_iter.bi_size / RRPC_EXPOSED_PAGE_SIZE;
+	struct nvm_rq *rqd;
+
+	do {
+		rqd = rrpc_inflight_laddr_acquire(rrpc, slba, len);
+		schedule();
+	} while (!rqd);
+
+	if (IS_ERR(rqd)) {
+		pr_err("rrpc: unable to acquire inflight IO\n");
+		bio_io_error(bio);
+		return;
+	}
+
+	rrpc_invalidate_range(rrpc, slba, len);
+	rrpc_inflight_laddr_release(rrpc, rqd);
+}
+
+static int block_is_full(struct rrpc_lun *rlun, struct rrpc_block *rblk)
+{
+	struct nvm_lun *lun = rlun->parent;
+
+	return (rblk->next_page == lun->nr_pages_per_blk);
+}
+
+static sector_t block_to_addr(struct rrpc_block *rblk)
+{
+	struct nvm_block *blk = rblk->parent;
+	struct nvm_lun *lun = rblk->parent->lun;
+
+	return blk->id * lun->nr_pages_per_blk;
+}
+
+/* requires lun->lock taken */
+static void rrpc_set_lun_cur(struct rrpc_lun *rlun, struct rrpc_block *rblk)
+{
+	BUG_ON(!rblk);
+
+	if (rlun->cur) {
+		spin_lock(&rlun->cur->lock);
+		WARN_ON(!block_is_full(rlun, rlun->cur));
+		spin_unlock(&rlun->cur->lock);
+	}
+	rlun->cur = rblk;
+}
+
+static struct rrpc_block *rrpc_get_blk(struct rrpc *rrpc, struct rrpc_lun *rlun,
+							unsigned long flags)
+{
+	struct nvm_block *blk;
+	struct rrpc_block *rblk;
+
+	blk = nvm_get_blk(rrpc->dev, rlun->parent, 0);
+	if (!blk)
+		return NULL;
+
+	rblk = &rlun->blocks[blk->id];
+	blk->priv = rblk;
+
+	bitmap_zero(rblk->invalid_pages, rlun->parent->nr_pages_per_blk);
+	rblk->next_page = 0;
+	rblk->nr_invalid_pages = 0;
+	atomic_set(&rblk->data_cmnt_size, 0);
+
+	return rblk;
+}
+
+static void rrpc_put_blk(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	nvm_put_blk(rrpc->dev, rblk->parent);
+}
+
+static struct rrpc_lun *get_next_lun(struct rrpc *rrpc)
+{
+	int next = atomic_inc_return(&rrpc->next_lun);
+
+	return &rrpc->luns[next % rrpc->nr_luns];
+}
+
+static void rrpc_gc_kick(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	unsigned int i;
+
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		rlun = &rrpc->luns[i];
+		queue_work(rrpc->krqd_wq, &rlun->ws_gc);
+	}
+}
+
+/*
+ * timed GC every interval.
+ */
+static void rrpc_gc_timer(unsigned long data)
+{
+	struct rrpc *rrpc = (struct rrpc *)data;
+
+	rrpc_gc_kick(rrpc);
+	mod_timer(&rrpc->gc_timer, jiffies + msecs_to_jiffies(10));
+}
+
+static void rrpc_end_sync_bio(struct bio *bio, int error)
+{
+	struct completion *waiting = bio->bi_private;
+
+	if (error)
+		pr_err("nvm: gc request failed (%u).\n", error);
+
+	complete(waiting);
+}
+
+/*
+ * rrpc_move_valid_pages -- migrate live data off the block
+ * @rrpc: the 'rrpc' structure
+ * @block: the block from which to migrate live pages
+ *
+ * Description:
+ *   GC algorithms may call this function to migrate remaining live
+ *   pages off the block prior to erasing it. This function blocks
+ *   further execution until the operation is complete.
+ */
+static int rrpc_move_valid_pages(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct request_queue *q = rrpc->dev->q;
+	struct rrpc_rev_addr *rev;
+	struct nvm_rq *rqd;
+	struct bio *bio;
+	struct page *page;
+	int slot;
+	int nr_pgs_per_blk = rblk->parent->lun->nr_pages_per_blk;
+	sector_t phys_addr;
+	DECLARE_COMPLETION_ONSTACK(wait);
+
+	if (bitmap_full(rblk->invalid_pages, nr_pgs_per_blk))
+		return 0;
+
+	bio = bio_alloc(GFP_NOIO, 1);
+	if (!bio) {
+		pr_err("nvm: could not alloc bio to gc\n");
+		return -ENOMEM;
+	}
+
+	page = mempool_alloc(rrpc->page_pool, GFP_NOIO);
+
+	while ((slot = find_first_zero_bit(rblk->invalid_pages,
+					    nr_pgs_per_blk)) < nr_pgs_per_blk) {
+
+		/* Lock laddr */
+		phys_addr = (rblk->parent->id * nr_pgs_per_blk) + slot;
+
+try:
+		spin_lock(&rrpc->rev_lock);
+		/* Get logical address from physical to logical table */
+		rev = &rrpc->rev_trans_map[phys_addr - rrpc->poffset];
+		/* already updated by previous regular write */
+		if (rev->addr == ADDR_EMPTY) {
+			spin_unlock(&rrpc->rev_lock);
+			continue;
+		}
+
+		rqd = rrpc_inflight_laddr_acquire(rrpc, rev->addr, 1);
+		if (IS_ERR_OR_NULL(rqd)) {
+			spin_unlock(&rrpc->rev_lock);
+			schedule();
+			goto try;
+		}
+
+		spin_unlock(&rrpc->rev_lock);
+
+		/* Perform read to do GC */
+		bio->bi_iter.bi_sector = rrpc_get_sector(rev->addr);
+		bio->bi_rw = READ;
+		bio->bi_private = &wait;
+		bio->bi_end_io = rrpc_end_sync_bio;
+
+		/* TODO: may fail when EXP_PG_SIZE > PAGE_SIZE */
+		bio_add_pc_page(q, bio, page, RRPC_EXPOSED_PAGE_SIZE, 0);
+
+		if (rrpc_submit_io(rrpc, bio, rqd, NVM_IOTYPE_GC)) {
+			pr_err("rrpc: gc read failed.\n");
+			rrpc_inflight_laddr_release(rrpc, rqd);
+			goto finished;
+		}
+		wait_for_completion_io(&wait);
+
+		bio_reset(bio);
+		reinit_completion(&wait);
+
+		bio->bi_iter.bi_sector = rrpc_get_sector(rev->addr);
+		bio->bi_rw = WRITE;
+		bio->bi_private = &wait;
+		bio->bi_end_io = rrpc_end_sync_bio;
+
+		bio_add_pc_page(q, bio, page, RRPC_EXPOSED_PAGE_SIZE, 0);
+
+		/* turn the command around and write the data back to a new
+		 * address */
+		if (rrpc_submit_io(rrpc, bio, rqd, NVM_IOTYPE_GC)) {
+			pr_err("rrpc: gc write failed.\n");
+			rrpc_inflight_laddr_release(rrpc, rqd);
+			goto finished;
+		}
+		wait_for_completion_io(&wait);
+
+		rrpc_inflight_laddr_release(rrpc, rqd);
+
+		bio_reset(bio);
+	}
+
+finished:
+	mempool_free(page, rrpc->page_pool);
+	bio_put(bio);
+
+	if (!bitmap_full(rblk->invalid_pages, nr_pgs_per_blk)) {
+		pr_err("nvm: failed to garbage collect block\n");
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static void rrpc_block_gc(struct work_struct *work)
+{
+	struct rrpc_block_gc *gcb = container_of(work, struct rrpc_block_gc,
+									ws_gc);
+	struct rrpc *rrpc = gcb->rrpc;
+	struct rrpc_block *rblk = gcb->rblk;
+	struct nvm_dev *dev = rrpc->dev;
+
+	pr_debug("nvm: block '%llu' being reclaimed\n", rblk->parent->id);
+
+	if (rrpc_move_valid_pages(rrpc, rblk))
+		goto done;
+
+	nvm_erase_blk(dev, rblk->parent);
+	rrpc_put_blk(rrpc, rblk);
+done:
+	mempool_free(gcb, rrpc->gcb_pool);
+}
+
+/* the block with highest number of invalid pages, will be in the beginning
+ * of the list */
+static struct rrpc_block *rblock_max_invalid(struct rrpc_block *ra,
+							struct rrpc_block *rb)
+{
+	if (ra->nr_invalid_pages == rb->nr_invalid_pages)
+		return ra;
+
+	return (ra->nr_invalid_pages < rb->nr_invalid_pages) ? rb : ra;
+}
+
+/* linearly find the block with highest number of invalid pages
+ * requires lun->lock */
+static struct rrpc_block *block_prio_find_max(struct rrpc_lun *rlun)
+{
+	struct list_head *prio_list = &rlun->prio_list;
+	struct rrpc_block *rblock, *max;
+
+	BUG_ON(list_empty(prio_list));
+
+	max = list_first_entry(prio_list, struct rrpc_block, prio);
+	list_for_each_entry(rblock, prio_list, prio)
+		max = rblock_max_invalid(max, rblock);
+
+	return max;
+}
+
+static void rrpc_lun_gc(struct work_struct *work)
+{
+	struct rrpc_lun *rlun = container_of(work, struct rrpc_lun, ws_gc);
+	struct rrpc *rrpc = rlun->rrpc;
+	struct nvm_lun *lun = rlun->parent;
+	struct rrpc_block_gc *gcb;
+	unsigned int nr_blocks_need;
+
+	nr_blocks_need = lun->nr_blocks / GC_LIMIT_INVERSE;
+
+	if (nr_blocks_need < rrpc->nr_luns)
+		nr_blocks_need = rrpc->nr_luns;
+
+	spin_lock(&lun->lock);
+	while (nr_blocks_need > lun->nr_free_blocks &&
+					!list_empty(&rlun->prio_list)) {
+		struct rrpc_block *rblock = block_prio_find_max(rlun);
+		struct nvm_block *block = rblock->parent;
+
+		if (!rblock->nr_invalid_pages)
+			break;
+
+		list_del_init(&rblock->prio);
+
+		BUG_ON(!block_is_full(rlun, rblock));
+
+		pr_debug("rrpc: selected block '%llu' for GC\n", block->id);
+
+		gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
+		if (!gcb)
+			break;
+
+		gcb->rrpc = rrpc;
+		gcb->rblk = rblock;
+		INIT_WORK(&gcb->ws_gc, rrpc_block_gc);
+
+		queue_work(rrpc->kgc_wq, &gcb->ws_gc);
+
+		nr_blocks_need--;
+	}
+	spin_unlock(&lun->lock);
+
+	/* TODO: Hint that request queue can be started again */
+}
+
+static void rrpc_gc_queue(struct work_struct *work)
+{
+	struct rrpc_block_gc *gcb = container_of(work, struct rrpc_block_gc,
+									ws_gc);
+	struct rrpc *rrpc = gcb->rrpc;
+	struct rrpc_block *rblk = gcb->rblk;
+	struct nvm_lun *lun = rblk->parent->lun;
+	struct rrpc_lun *rlun = &rrpc->luns[lun->id - rrpc->lun_offset];
+
+	spin_lock(&rlun->lock);
+	list_add_tail(&rblk->prio, &rlun->prio_list);
+	spin_unlock(&rlun->lock);
+
+	mempool_free(gcb, rrpc->gcb_pool);
+	pr_debug("nvm: block '%llu' is full, allow GC (sched)\n",
+							rblk->parent->id);
+}
+
+static const struct block_device_operations rrpc_fops = {
+	.owner		= THIS_MODULE,
+};
+
+static struct rrpc_lun *rrpc_get_lun_rr(struct rrpc *rrpc, int is_gc)
+{
+	unsigned int i;
+	struct rrpc_lun *rlun, *max_free;
+
+	if (!is_gc)
+		return get_next_lun(rrpc);
+
+	/* during GC, we don't care about RR, instead we want to make
+	 * sure that we maintain evenness between the block luns. */
+	max_free = &rrpc->luns[0];
+	/* prevent GC-ing lun from devouring pages of a lun with
+	 * little free blocks. We don't take the lock as we only need an
+	 * estimate. */
+	rrpc_for_each_lun(rrpc, rlun, i) {
+		if (rlun->parent->nr_free_blocks >
+					max_free->parent->nr_free_blocks)
+			max_free = rlun;
+	}
+
+	return max_free;
+}
+
+static struct rrpc_addr *rrpc_update_map(struct rrpc *rrpc, sector_t laddr,
+					struct rrpc_block *rblk, sector_t paddr)
+{
+	struct rrpc_addr *gp;
+	struct rrpc_rev_addr *rev;
+
+	BUG_ON(laddr >= rrpc->nr_pages);
+
+	gp = &rrpc->trans_map[laddr];
+	spin_lock(&rrpc->rev_lock);
+	if (gp->rblk)
+		rrpc_page_invalidate(rrpc, gp);
+
+	gp->addr = paddr;
+	gp->rblk = rblk;
+
+	rev = &rrpc->rev_trans_map[gp->addr - rrpc->poffset];
+	rev->addr = laddr;
+	spin_unlock(&rrpc->rev_lock);
+
+	return gp;
+}
+
+static sector_t rrpc_alloc_addr(struct rrpc_lun *rlun, struct rrpc_block *rblk)
+{
+	sector_t addr = ADDR_EMPTY;
+
+	spin_lock(&rblk->lock);
+	if (block_is_full(rlun, rblk))
+		goto out;
+
+	addr = block_to_addr(rblk) + rblk->next_page;
+
+	rblk->next_page++;
+out:
+	spin_unlock(&rblk->lock);
+	return addr;
+}
+
+/* Simple round-robin Logical to physical address translation.
+ *
+ * Retrieve the mapping using the active append point. Then update the ap for
+ * the next write to the disk.
+ *
+ * Returns rrpc_addr with the physical address and block. Remember to return to
+ * rrpc->addr_cache when request is finished.
+ */
+static struct rrpc_addr *rrpc_map_page(struct rrpc *rrpc, sector_t laddr,
+								int is_gc)
+{
+	struct rrpc_lun *rlun;
+	struct rrpc_block *rblk;
+	struct nvm_lun *lun;
+	sector_t paddr;
+
+	rlun = rrpc_get_lun_rr(rrpc, is_gc);
+	lun = rlun->parent;
+
+	if (!is_gc && lun->nr_free_blocks < rrpc->nr_luns * 4)
+		return NULL;
+
+	spin_lock(&rlun->lock);
+
+	rblk = rlun->cur;
+retry:
+	paddr = rrpc_alloc_addr(rlun, rblk);
+
+	if (paddr == ADDR_EMPTY) {
+		rblk = rrpc_get_blk(rrpc, rlun, 0);
+		if (rblk) {
+			rrpc_set_lun_cur(rlun, rblk);
+			goto retry;
+		}
+
+		if (is_gc) {
+			/* retry from emergency gc block */
+			paddr = rrpc_alloc_addr(rlun, rlun->gc_cur);
+			if (paddr == ADDR_EMPTY) {
+				rblk = rrpc_get_blk(rrpc, rlun, 1);
+				if (!rblk) {
+					pr_err("rrpc: no more blocks");
+					goto err;
+				}
+
+				rlun->gc_cur = rblk;
+				paddr = rrpc_alloc_addr(rlun, rlun->gc_cur);
+			}
+			rblk = rlun->gc_cur;
+		}
+	}
+
+	spin_unlock(&rlun->lock);
+	return rrpc_update_map(rrpc, laddr, rblk, paddr);
+err:
+	spin_unlock(&rlun->lock);
+	return NULL;
+}
+
+static void rrpc_run_gc(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct rrpc_block_gc *gcb;
+
+	gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
+	if (!gcb) {
+		pr_err("rrpc: unable to queue block for gc.");
+		return;
+	}
+
+	gcb->rrpc = rrpc;
+	gcb->rblk = rblk;
+
+	INIT_WORK(&gcb->ws_gc, rrpc_gc_queue);
+	queue_work(rrpc->kgc_wq, &gcb->ws_gc);
+}
+
+static void rrpc_end_io_write(struct rrpc *rrpc, struct rrpc_rq *rrqd,
+						sector_t laddr, uint8_t npages)
+{
+	struct rrpc_addr *p;
+	struct rrpc_block *rblk;
+	struct nvm_lun *lun;
+	int cmnt_size, i;
+
+	for (i = 0; i < npages; i++) {
+		p = &rrpc->trans_map[laddr + i];
+		rblk = p->rblk;
+		lun = rblk->parent->lun;
+
+		cmnt_size = atomic_inc_return(&rblk->data_cmnt_size);
+		if (unlikely(cmnt_size == lun->nr_pages_per_blk))
+			rrpc_run_gc(rrpc, rblk);
+	}
+}
+
+static void rrpc_end_io(struct nvm_rq *rqd, int error)
+{
+	struct rrpc *rrpc = container_of(rqd->ins, struct rrpc, instance);
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+	uint8_t npages = rqd->npages;
+	sector_t laddr = rrpc_get_laddr(rqd->bio) - npages;
+
+	if (bio_data_dir(rqd->bio) == WRITE)
+		rrpc_end_io_write(rrpc, rrqd, laddr, npages);
+
+	if (rrqd->flags & NVM_IOTYPE_GC)
+		return;
+
+	rrpc_unlock_rq(rrpc, rqd);
+	bio_put(rqd->bio);
+
+	if (npages > 1)
+		nvm_free_ppalist(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
+
+
+	mempool_free(rqd, rrpc->rq_pool);
+}
+
+static int rrpc_read_ppalist_rq(struct rrpc *rrpc, struct bio *bio,
+			struct nvm_rq *rqd, unsigned long flags, int npages)
+{
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+	struct rrpc_addr *gp;
+	sector_t laddr = rrpc_get_laddr(bio);
+	int is_gc = flags & NVM_IOTYPE_GC;
+	int i;
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd)) {
+		nvm_free_ppalist(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
+		return NVM_IO_REQUEUE;
+	}
+
+	for (i = 0; i < npages; i++) {
+		/* We assume that mapping occurs at 4KB granularity */
+		BUG_ON(!(laddr + i >= 0 && laddr + i < rrpc->nr_pages));
+		gp = &rrpc->trans_map[laddr + i];
+
+		if (gp->rblk) {
+			rqd->ppa_list[i] = gp->addr;
+		} else {
+			BUG_ON(is_gc);
+			rrpc_unlock_laddr(rrpc, r);
+			nvm_free_ppalist(rrpc->dev, rqd->ppa_list,
+							rqd->dma_ppa_list);
+			return NVM_IO_DONE;
+		}
+	}
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_read_rq(struct rrpc *rrpc, struct bio *bio, struct nvm_rq *rqd,
+							unsigned long flags)
+{
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+	int is_gc = flags & NVM_IOTYPE_GC;
+	sector_t laddr = rrpc_get_laddr(bio);
+	struct rrpc_addr *gp;
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd))
+		return NVM_IO_REQUEUE;
+
+	BUG_ON(!(laddr >= 0 && laddr < rrpc->nr_pages));
+	gp = &rrpc->trans_map[laddr];
+
+	if (gp->rblk) {
+		rqd->ppa = rrpc_get_sector(gp->addr);
+	} else {
+		BUG_ON(is_gc);
+		rrpc_unlock_rq(rrpc, rqd);
+		return NVM_IO_DONE;
+	}
+
+	rrqd->addr = gp;
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_write_ppalist_rq(struct rrpc *rrpc, struct bio *bio,
+			struct nvm_rq *rqd, unsigned long flags, int npages)
+{
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+	struct rrpc_addr *p;
+	sector_t laddr = rrpc_get_laddr(bio);
+	int is_gc = flags & NVM_IOTYPE_GC;
+	int i;
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd)) {
+		nvm_free_ppalist(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
+		return NVM_IO_REQUEUE;
+	}
+
+	for (i = 0; i < npages; i++) {
+		/* We assume that mapping occurs at 4KB granularity */
+		p = rrpc_map_page(rrpc, laddr + i, is_gc);
+		if (!p) {
+			BUG_ON(is_gc);
+			rrpc_unlock_laddr(rrpc, r);
+			nvm_free_ppalist(rrpc->dev, rqd->ppa_list,
+							rqd->dma_ppa_list);
+			rrpc_gc_kick(rrpc);
+			return NVM_IO_REQUEUE;
+		}
+
+		rqd->ppa_list[i] = p->addr;
+	}
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_write_rq(struct rrpc *rrpc, struct bio *bio,
+				struct nvm_rq *rqd, unsigned long flags)
+{
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+	struct rrpc_addr *p;
+	int is_gc = flags & NVM_IOTYPE_GC;
+	sector_t laddr = rrpc_get_laddr(bio);
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd))
+		return NVM_IO_REQUEUE;
+
+	p = rrpc_map_page(rrpc, laddr, is_gc);
+	if (!p) {
+		BUG_ON(is_gc);
+		rrpc_unlock_rq(rrpc, rqd);
+		rrpc_gc_kick(rrpc);
+		return NVM_IO_REQUEUE;
+	}
+
+	rqd->ppa = rrpc_get_sector(p->addr);
+	rrqd->addr = p;
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_setup_rq(struct rrpc *rrpc, struct bio *bio,
+			struct nvm_rq *rqd, unsigned long flags, uint8_t npages)
+{
+	if (npages > 1) {
+		rqd->ppa_list = nvm_alloc_ppalist(rrpc->dev, GFP_KERNEL,
+							&rqd->dma_ppa_list);
+		if (!rqd->ppa_list) {
+			pr_err("rrpc: not able to allocate ppa list\n");
+			return NVM_IO_ERR;
+		}
+
+		if (bio_rw(bio) == WRITE)
+			return rrpc_write_ppalist_rq(rrpc, bio, rqd, flags,
+									npages);
+
+		return rrpc_read_ppalist_rq(rrpc, bio, rqd, flags, npages);
+	}
+
+	if (bio_rw(bio) == WRITE)
+		return rrpc_write_rq(rrpc, bio, rqd, flags);
+
+	return rrpc_read_rq(rrpc, bio, rqd, flags);
+}
+
+static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
+				struct nvm_rq *rqd, unsigned long flags)
+{
+	int err;
+	struct rrpc_rq *rrq = nvm_rq_to_pdu(rqd);
+	uint8_t npages = rrpc_get_pages(bio);
+
+	err = rrpc_setup_rq(rrpc, bio, rqd, flags, npages);
+	if (err)
+		return err;
+
+	bio_get(bio);
+	rqd->bio = bio;
+	rqd->ins = &rrpc->instance;
+	rqd->npages = npages;
+	rrq->flags = flags;
+
+	err = nvm_submit_io(rrpc->dev, rqd);
+	if (err) {
+		pr_err("rrpc: IO submission failed: %d\n", err);
+		return NVM_IO_ERR;
+	}
+
+	return NVM_IO_OK;
+}
+
+static void rrpc_make_rq(struct request_queue *q, struct bio *bio)
+{
+	struct rrpc *rrpc = q->queuedata;
+	struct nvm_rq *rqd;
+	int err;
+
+	if (bio->bi_rw & REQ_DISCARD) {
+		rrpc_discard(rrpc, bio);
+		return;
+	}
+
+	rqd = mempool_alloc(rrpc->rq_pool, GFP_KERNEL);
+	if (!rqd) {
+		pr_err_ratelimited("rrpc: not able to queue bio.");
+		bio_io_error(bio);
+		return;
+	}
+
+	err = rrpc_submit_io(rrpc, bio, rqd, NVM_IOTYPE_NONE);
+	switch (err) {
+	case NVM_IO_OK:
+		return;
+	case NVM_IO_ERR:
+		if (rqd->ppa_list)
+			nvm_free_ppalist(rrpc->dev, rqd->ppa_list,
+							rqd->dma_ppa_list);
+		bio_io_error(bio);
+		break;
+	case NVM_IO_DONE:
+		bio_endio(bio, 0);
+		break;
+	case NVM_IO_REQUEUE:
+		spin_lock(&rrpc->bio_lock);
+		bio_list_add(&rrpc->requeue_bios, bio);
+		spin_unlock(&rrpc->bio_lock);
+		queue_work(rrpc->kgc_wq, &rrpc->ws_requeue);
+		break;
+	}
+
+	mempool_free(rqd, rrpc->rq_pool);
+}
+
+static void rrpc_requeue(struct work_struct *work)
+{
+	struct rrpc *rrpc = container_of(work, struct rrpc, ws_requeue);
+	struct bio_list bios;
+	struct bio *bio;
+
+	bio_list_init(&bios);
+
+	spin_lock(&rrpc->bio_lock);
+	bio_list_merge(&bios, &rrpc->requeue_bios);
+	bio_list_init(&rrpc->requeue_bios);
+	spin_unlock(&rrpc->bio_lock);
+
+	while ((bio = bio_list_pop(&bios)))
+		rrpc_make_rq(rrpc->disk->queue, bio);
+}
+
+static void rrpc_gc_free(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	int i;
+
+	if (rrpc->krqd_wq)
+		destroy_workqueue(rrpc->krqd_wq);
+
+	if (rrpc->kgc_wq)
+		destroy_workqueue(rrpc->kgc_wq);
+
+	if (!rrpc->luns)
+		return;
+
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		rlun = &rrpc->luns[i];
+
+		if (!rlun->blocks)
+			break;
+		vfree(rlun->blocks);
+	}
+}
+
+static int rrpc_gc_init(struct rrpc *rrpc)
+{
+	rrpc->krqd_wq = alloc_workqueue("rrpc-lun", WQ_MEM_RECLAIM|WQ_UNBOUND,
+						rrpc->nr_luns);
+	if (!rrpc->krqd_wq)
+		return -ENOMEM;
+
+	rrpc->kgc_wq = alloc_workqueue("rrpc-bg", WQ_MEM_RECLAIM, 1);
+	if (!rrpc->kgc_wq)
+		return -ENOMEM;
+
+	setup_timer(&rrpc->gc_timer, rrpc_gc_timer, (unsigned long)rrpc);
+
+	return 0;
+}
+
+static void rrpc_map_free(struct rrpc *rrpc)
+{
+	vfree(rrpc->rev_trans_map);
+	vfree(rrpc->trans_map);
+}
+
+static int rrpc_l2p_update(u64 slba, u64 nlb, u64 *entries, void *private)
+{
+	struct rrpc *rrpc = (struct rrpc *)private;
+	struct nvm_dev *dev = rrpc->dev;
+	struct rrpc_addr *addr = rrpc->trans_map + slba;
+	struct rrpc_rev_addr *raddr = rrpc->rev_trans_map;
+	sector_t max_pages = dev->total_pages * (dev->sector_size >> 9);
+	u64 elba = slba + nlb;
+	u64 i;
+
+	if (unlikely(elba > dev->total_pages)) {
+		pr_err("nvm: L2P data from device is out of bounds!\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < nlb; i++) {
+		u64 pba = le64_to_cpu(entries[i]);
+		/* LNVM treats address-spaces as silos, LBA and PBA are
+		 * equally large and zero-indexed. */
+		if (unlikely(pba >= max_pages && pba != U64_MAX)) {
+			pr_err("nvm: L2P data entry is out of bounds!\n");
+			return -EINVAL;
+		}
+
+		/* Address zero is a special one. The first page on a disk is
+		 * protected. As it often holds internal device boot
+		 * information. */
+		if (!pba)
+			continue;
+
+		addr[i].addr = pba;
+		raddr[pba].addr = slba + i;
+	}
+
+	return 0;
+}
+
+static int rrpc_map_init(struct rrpc *rrpc)
+{
+	struct nvm_dev *dev = rrpc->dev;
+	sector_t i;
+	int ret;
+
+	rrpc->trans_map = vzalloc(sizeof(struct rrpc_addr) * rrpc->nr_pages);
+	if (!rrpc->trans_map)
+		return -ENOMEM;
+
+	rrpc->rev_trans_map = vmalloc(sizeof(struct rrpc_rev_addr)
+							* rrpc->nr_pages);
+	if (!rrpc->rev_trans_map)
+		return -ENOMEM;
+
+	for (i = 0; i < rrpc->nr_pages; i++) {
+		struct rrpc_addr *p = &rrpc->trans_map[i];
+		struct rrpc_rev_addr *r = &rrpc->rev_trans_map[i];
+
+		p->addr = ADDR_EMPTY;
+		r->addr = ADDR_EMPTY;
+	}
+
+	if (!dev->ops->get_l2p_tbl)
+		return 0;
+
+	/* Bring up the mapping table from device */
+	ret = dev->ops->get_l2p_tbl(dev->q, 0, dev->total_pages,
+							rrpc_l2p_update, rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not read L2P table.\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+
+/* Minimum pages needed within a lun */
+#define PAGE_POOL_SIZE 16
+#define ADDR_POOL_SIZE 64
+
+static int rrpc_core_init(struct rrpc *rrpc)
+{
+	down_write(&rrpc_lock);
+	if (!rrpc_gcb_cache) {
+		rrpc_gcb_cache = kmem_cache_create("rrpc_gcb",
+				sizeof(struct rrpc_block_gc), 0, 0, NULL);
+		if (!rrpc_gcb_cache) {
+			up_write(&rrpc_lock);
+			return -ENOMEM;
+		}
+
+		rrpc_rq_cache = kmem_cache_create("rrpc_rq",
+				sizeof(struct nvm_rq) + sizeof(struct rrpc_rq),
+				0, 0, NULL);
+		if (!rrpc_rq_cache) {
+			kmem_cache_destroy(rrpc_gcb_cache);
+			up_write(&rrpc_lock);
+			return -ENOMEM;
+		}
+	}
+	up_write(&rrpc_lock);
+
+	rrpc->page_pool = mempool_create_page_pool(PAGE_POOL_SIZE, 0);
+	if (!rrpc->page_pool)
+		return -ENOMEM;
+
+	rrpc->gcb_pool = mempool_create_slab_pool(rrpc->dev->nr_luns,
+								rrpc_gcb_cache);
+	if (!rrpc->gcb_pool)
+		return -ENOMEM;
+
+	rrpc->rq_pool = mempool_create_slab_pool(64, rrpc_rq_cache);
+	if (!rrpc->rq_pool)
+		return -ENOMEM;
+
+	spin_lock_init(&rrpc->inflights.lock);
+	INIT_LIST_HEAD(&rrpc->inflights.reqs);
+
+	return 0;
+}
+
+static void rrpc_core_free(struct rrpc *rrpc)
+{
+	if (rrpc->page_pool)
+		mempool_destroy(rrpc->page_pool);
+	if (rrpc->gcb_pool)
+		mempool_destroy(rrpc->gcb_pool);
+	if (rrpc->rq_pool)
+		mempool_destroy(rrpc->rq_pool);
+}
+
+static void rrpc_luns_free(struct rrpc *rrpc)
+{
+	kfree(rrpc->luns);
+}
+
+static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end)
+{
+	struct nvm_dev *dev = rrpc->dev;
+	struct nvm_lun *luns;
+	struct rrpc_lun *rlun;
+	int i, j;
+
+	spin_lock_init(&rrpc->rev_lock);
+
+	luns = dev->bm->get_luns(dev, lun_begin, lun_end);
+	if (!luns)
+		return -EINVAL;
+
+	rrpc->luns = kcalloc(rrpc->nr_luns, sizeof(struct rrpc_lun),
+								GFP_KERNEL);
+	if (!rrpc->luns)
+		return -ENOMEM;
+
+	/* 1:1 mapping */
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		struct nvm_lun *lun = &luns[i];
+
+		if (lun->nr_pages_per_blk >
+				MAX_INVALID_PAGES_STORAGE * BITS_PER_LONG) {
+			pr_err("rrpc: number of pages per block too high.");
+			goto err;
+		}
+
+		rlun = &rrpc->luns[i];
+		rlun->rrpc = rrpc;
+		rlun->parent = lun;
+		INIT_LIST_HEAD(&rlun->prio_list);
+		INIT_WORK(&rlun->ws_gc, rrpc_lun_gc);
+		spin_lock_init(&rlun->lock);
+
+		rrpc->total_blocks += lun->nr_blocks;
+		rrpc->nr_pages += lun->nr_blocks * lun->nr_pages_per_blk;
+
+		rlun->blocks = vzalloc(sizeof(struct rrpc_block) *
+						 lun->nr_blocks);
+		if (!rlun->blocks)
+			goto err;
+
+		for (j = 0; j < lun->nr_blocks; j++) {
+			struct rrpc_block *rblk = &rlun->blocks[j];
+			struct nvm_block *blk = &lun->blocks[j];
+
+			rblk->parent = blk;
+			INIT_LIST_HEAD(&rblk->prio);
+			spin_lock_init(&rblk->lock);
+		}
+	}
+
+	return 0;
+err:
+	return -ENOMEM;
+}
+
+static void rrpc_free(struct rrpc *rrpc)
+{
+	rrpc_gc_free(rrpc);
+	rrpc_map_free(rrpc);
+	rrpc_core_free(rrpc);
+	rrpc_luns_free(rrpc);
+
+	kfree(rrpc);
+}
+
+static void rrpc_exit(void *private)
+{
+	struct rrpc *rrpc = private;
+
+	del_timer(&rrpc->gc_timer);
+
+	flush_workqueue(rrpc->krqd_wq);
+	flush_workqueue(rrpc->kgc_wq);
+
+	rrpc_free(rrpc);
+}
+
+static sector_t rrpc_capacity(void *private)
+{
+	struct rrpc *rrpc = private;
+	struct nvm_dev *dev = rrpc->dev;
+	sector_t reserved;
+
+	/* cur, gc, and two emergency blocks for each lun */
+	reserved = rrpc->nr_luns * dev->max_pages_per_blk * 4;
+
+	if (reserved > rrpc->nr_pages) {
+		pr_err("rrpc: not enough space available to expose storage.\n");
+		return 0;
+	}
+
+	return ((rrpc->nr_pages - reserved) / 10) * 9 * NR_PHY_IN_LOG;
+}
+
+/*
+ * Looks up the logical address from reverse trans map and check if its valid by
+ * comparing the logical to physical address with the physical address.
+ * Returns 0 on free, otherwise 1 if in use
+ */
+static void rrpc_block_map_update(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct nvm_lun *lun = rblk->parent->lun;
+	int offset;
+	struct rrpc_addr *laddr;
+	sector_t paddr, pladdr;
+
+	for (offset = 0; offset < lun->nr_pages_per_blk; offset++) {
+		paddr = block_to_addr(rblk) + offset;
+
+		pladdr = rrpc->rev_trans_map[paddr].addr;
+		if (pladdr == ADDR_EMPTY)
+			continue;
+
+		laddr = &rrpc->trans_map[pladdr];
+
+		if (paddr == laddr->addr) {
+			laddr->rblk = rblk;
+		} else {
+			set_bit(offset, rblk->invalid_pages);
+			rblk->nr_invalid_pages++;
+		}
+	}
+}
+
+static int rrpc_blocks_init(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	struct rrpc_block *rblk;
+	int lun_iter, blk_iter;
+
+	for (lun_iter = 0; lun_iter < rrpc->nr_luns; lun_iter++) {
+		rlun = &rrpc->luns[lun_iter];
+
+		for (blk_iter = 0; blk_iter < rlun->parent->nr_blocks;
+								blk_iter++) {
+			rblk = &rlun->blocks[blk_iter];
+			rrpc_block_map_update(rrpc, rblk);
+		}
+	}
+
+	return 0;
+}
+
+static int rrpc_luns_configure(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	struct rrpc_block *rblk;
+	int i;
+
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		rlun = &rrpc->luns[i];
+
+		rblk = rrpc_get_blk(rrpc, rlun, 0);
+		if (!rblk)
+			return -EINVAL;
+
+		rrpc_set_lun_cur(rlun, rblk);
+
+		/* Emergency gc block */
+		rblk = rrpc_get_blk(rrpc, rlun, 1);
+		if (!rblk)
+			return -EINVAL;
+		rlun->gc_cur = rblk;
+	}
+
+	return 0;
+}
+
+static struct nvm_tgt_type tt_rrpc;
+
+static void *rrpc_init(struct nvm_dev *dev, struct gendisk *tdisk,
+						int lun_begin, int lun_end)
+{
+	struct request_queue *bqueue = dev->q;
+	struct request_queue *tqueue = tdisk->queue;
+	struct rrpc *rrpc;
+	int ret;
+
+	rrpc = kzalloc(sizeof(struct rrpc), GFP_KERNEL);
+	if (!rrpc) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	rrpc->instance.tt = &tt_rrpc;
+	rrpc->dev = dev;
+	rrpc->disk = tdisk;
+
+	bio_list_init(&rrpc->requeue_bios);
+	spin_lock_init(&rrpc->bio_lock);
+	INIT_WORK(&rrpc->ws_requeue, rrpc_requeue);
+
+	rrpc->nr_luns = lun_end - lun_begin + 1;
+
+	/* simple round-robin strategy */
+	atomic_set(&rrpc->next_lun, -1);
+
+	ret = rrpc_luns_init(rrpc, lun_begin, lun_end);
+	if (ret) {
+		pr_err("nvm: could not initialize luns\n");
+		goto err;
+	}
+
+	rrpc->poffset = rrpc->luns[0].parent->nr_blocks *
+			rrpc->luns[0].parent->nr_pages_per_blk * lun_begin;
+	rrpc->lun_offset = lun_begin;
+
+	ret = rrpc_core_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize core\n");
+		goto err;
+	}
+
+	ret = rrpc_map_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize maps\n");
+		goto err;
+	}
+
+	ret = rrpc_blocks_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize state for blocks\n");
+		goto err;
+	}
+
+	ret = rrpc_luns_configure(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: not enough blocks available in LUNs.\n");
+		goto err;
+	}
+
+	ret = rrpc_gc_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize gc\n");
+		goto err;
+	}
+
+	/* inherit the size from the underlying device */
+	blk_queue_logical_block_size(tqueue, queue_physical_block_size(bqueue));
+	blk_queue_max_hw_sectors(tqueue, queue_max_hw_sectors(bqueue));
+
+	pr_info("nvm: rrpc initialized with %u luns and %llu pages.\n",
+			rrpc->nr_luns, (unsigned long long)rrpc->nr_pages);
+
+	mod_timer(&rrpc->gc_timer, jiffies + msecs_to_jiffies(10));
+
+	return rrpc;
+err:
+	rrpc_free(rrpc);
+	return ERR_PTR(ret);
+}
+
+/* round robin, page-based FTL, and cost-based GC */
+static struct nvm_tgt_type tt_rrpc = {
+	.name		= "rrpc",
+
+	.make_rq	= rrpc_make_rq,
+	.capacity	= rrpc_capacity,
+	.end_io		= rrpc_end_io,
+
+	.init		= rrpc_init,
+	.exit		= rrpc_exit,
+};
+
+static int __init rrpc_module_init(void)
+{
+	return nvm_register_target(&tt_rrpc);
+}
+
+static void rrpc_module_exit(void)
+{
+	nvm_unregister_target(&tt_rrpc);
+}
+
+module_init(rrpc_module_init);
+module_exit(rrpc_module_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Hybrid Target for Open-Channel SSDs");
diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h
new file mode 100644
index 0000000..706ba0f
--- /dev/null
+++ b/drivers/lightnvm/rrpc.h
@@ -0,0 +1,236 @@
+/*
+ * Copyright (C) 2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mabj@itu.dk>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Implementation of a Round-robin page-based Hybrid FTL for Open-channel SSDs.
+ */
+
+#ifndef RRPC_H_
+#define RRPC_H_
+
+#include <linux/blkdev.h>
+#include <linux/blk-mq.h>
+#include <linux/bio.h>
+#include <linux/module.h>
+#include <linux/kthread.h>
+#include <linux/vmalloc.h>
+
+#include <linux/lightnvm.h>
+
+/* Run only GC if less than 1/X blocks are free */
+#define GC_LIMIT_INVERSE 10
+#define GC_TIME_SECS 100
+
+#define RRPC_SECTOR (512)
+#define RRPC_EXPOSED_PAGE_SIZE (4096)
+
+#define NR_PHY_IN_LOG (RRPC_EXPOSED_PAGE_SIZE / RRPC_SECTOR)
+
+struct rrpc_inflight {
+	struct list_head reqs;
+	spinlock_t lock;
+};
+
+struct rrpc_inflight_rq {
+	struct list_head list;
+	sector_t l_start;
+	sector_t l_end;
+};
+
+struct rrpc_rq {
+	struct rrpc_inflight_rq inflight_rq;
+	struct rrpc_addr *addr;
+	unsigned long flags;
+};
+
+struct rrpc_block {
+	struct nvm_block *parent;
+	struct list_head prio;
+
+#define MAX_INVALID_PAGES_STORAGE 8
+	/* Bitmap for invalid page intries */
+	unsigned long invalid_pages[MAX_INVALID_PAGES_STORAGE];
+	/* points to the next writable page within a block */
+	unsigned int next_page;
+	/* number of pages that are invalid, wrt host page size */
+	unsigned int nr_invalid_pages;
+
+	spinlock_t lock;
+	atomic_t data_cmnt_size; /* data pages committed to stable storage */
+};
+
+struct rrpc_lun {
+	struct rrpc *rrpc;
+	struct nvm_lun *parent;
+	struct rrpc_block *cur, *gc_cur;
+	struct rrpc_block *blocks;	/* Reference to block allocation */
+	struct list_head prio_list;		/* Blocks that may be GC'ed */
+	struct work_struct ws_gc;
+
+	spinlock_t lock;
+};
+
+struct rrpc {
+	/* instance must be kept in top to resolve rrpc in unprep */
+	struct nvm_tgt_instance instance;
+
+	struct nvm_dev *dev;
+	struct gendisk *disk;
+
+	sector_t poffset; /* physical page offset */
+	int lun_offset;
+
+	int nr_luns;
+	struct rrpc_lun *luns;
+
+	/* calculated values */
+	unsigned long nr_pages;
+	unsigned long total_blocks;
+
+	/* Write strategy variables. Move these into each for structure for each
+	 * strategy */
+	atomic_t next_lun; /* Whenever a page is written, this is updated
+			    * to point to the next write lun */
+
+	spinlock_t bio_lock;
+	struct bio_list requeue_bios;
+	struct work_struct ws_requeue;
+
+	/* Simple translation map of logical addresses to physical addresses.
+	 * The logical addresses is known by the host system, while the physical
+	 * addresses are used when writing to the disk block device. */
+	struct rrpc_addr *trans_map;
+	/* also store a reverse map for garbage collection */
+	struct rrpc_rev_addr *rev_trans_map;
+	spinlock_t rev_lock;
+
+	struct rrpc_inflight inflights;
+
+	mempool_t *addr_pool;
+	mempool_t *page_pool;
+	mempool_t *gcb_pool;
+	mempool_t *rq_pool;
+
+	struct timer_list gc_timer;
+	struct workqueue_struct *krqd_wq;
+	struct workqueue_struct *kgc_wq;
+};
+
+struct rrpc_block_gc {
+	struct rrpc *rrpc;
+	struct rrpc_block *rblk;
+	struct work_struct ws_gc;
+};
+
+/* Logical to physical mapping */
+struct rrpc_addr {
+	sector_t addr;
+	struct rrpc_block *rblk;
+};
+
+/* Physical to logical mapping */
+struct rrpc_rev_addr {
+	sector_t addr;
+};
+
+static inline sector_t rrpc_get_laddr(struct bio *bio)
+{
+	return bio->bi_iter.bi_sector / NR_PHY_IN_LOG;
+}
+
+static inline unsigned int rrpc_get_pages(struct bio *bio)
+{
+	return  bio->bi_iter.bi_size / RRPC_EXPOSED_PAGE_SIZE;
+}
+
+static inline sector_t rrpc_get_sector(sector_t laddr)
+{
+	return laddr * NR_PHY_IN_LOG;
+}
+
+static inline int request_intersects(struct rrpc_inflight_rq *r,
+				sector_t laddr_start, sector_t laddr_end)
+{
+	return (laddr_end >= r->l_start && laddr_end <= r->l_end) &&
+		(laddr_start >= r->l_start && laddr_start <= r->l_end);
+}
+
+static int __rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
+			     unsigned pages, struct rrpc_inflight_rq *r)
+{
+	sector_t laddr_end = laddr + pages - 1;
+	struct rrpc_inflight_rq *rtmp;
+
+	spin_lock_irq(&rrpc->inflights.lock);
+	list_for_each_entry(rtmp, &rrpc->inflights.reqs, list) {
+		if (unlikely(request_intersects(rtmp, laddr, laddr_end))) {
+			/* existing, overlapping request, come back later */
+			spin_unlock_irq(&rrpc->inflights.lock);
+			return 1;
+		}
+	}
+
+	r->l_start = laddr;
+	r->l_end = laddr_end;
+
+	list_add_tail(&r->list, &rrpc->inflights.reqs);
+	spin_unlock_irq(&rrpc->inflights.lock);
+	return 0;
+}
+
+static inline int rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
+				 unsigned pages,
+				 struct rrpc_inflight_rq *r)
+{
+	BUG_ON((laddr + pages) > rrpc->nr_pages);
+
+	return __rrpc_lock_laddr(rrpc, laddr, pages, r);
+}
+
+static inline struct rrpc_inflight_rq *rrpc_get_inflight_rq(struct nvm_rq *rqd)
+{
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+
+	return &rrqd->inflight_rq;
+}
+
+static inline int rrpc_lock_rq(struct rrpc *rrpc, struct bio *bio,
+							struct nvm_rq *rqd)
+{
+	sector_t laddr = rrpc_get_laddr(bio);
+	unsigned int pages = rrpc_get_pages(bio);
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+
+	return rrpc_lock_laddr(rrpc, laddr, pages, r);
+}
+
+static inline void rrpc_unlock_laddr(struct rrpc *rrpc,
+						struct rrpc_inflight_rq *r)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&rrpc->inflights.lock, flags);
+	list_del_init(&r->list);
+	spin_unlock_irqrestore(&rrpc->inflights.lock, flags);
+}
+
+static inline void rrpc_unlock_rq(struct rrpc *rrpc, struct nvm_rq *rqd)
+{
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+	uint8_t pages = rqd->npages;
+
+	BUG_ON((r->l_start + pages) > rrpc->nr_pages);
+
+	rrpc_unlock_laddr(rrpc, r);
+}
+
+#endif /* RRPC_H_ */
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 9654354..0ac73d5 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -135,6 +135,7 @@ struct nvm_dev_ops {
 	nvm_alloc_ppalist_fn	*alloc_ppalist;
 	nvm_free_ppalist_fn	*free_ppalist;
 
+	int			dev_sector_size;
 	uint8_t			max_phys_sect;
 };
 
@@ -286,10 +287,8 @@ extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
  * bytes chunks. This should be set to the smallest command size available for a
  * given device.
  */
-#define NVM_SECTOR (512)
-#define EXPOSED_PAGE_SIZE (4096)
 
-#define NR_PHY_IN_LOG (EXPOSED_PAGE_SIZE / NVM_SECTOR)
+#define DEV_EXPOSED_PAGE_SIZE (4096)
 
 #define NVM_MSG_PREFIX "nvm"
 #define ADDR_EMPTY (~0ULL)
-- 
2.1.4



* [PATCH v7 2/5] lightnvm: Hybrid Open-Channel SSD RRPC target
@ 2015-08-07 14:29   ` Matias Bjørling
  0 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-08-07 14:29 UTC (permalink / raw)


This target implements a simple FTL strategy for Open-Channel SSDs.
It does round-robin selection across channels and luns. It uses a
simple greedy cost-based garbage collector and exposes the physical
flash as a block device.
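
[Editor's sketch] The round-robin selection can be pictured with the
following self-contained user-space C program; the names are
illustrative, and only the -1 seeding mirrors the patch's
atomic_set(&rrpc->next_lun, -1):

#include <stdatomic.h>
#include <stdio.h>

struct toy_lun { int id; };

static atomic_int next_lun = -1;	/* seeded as in rrpc_init() */

/* one atomic increment per written page spreads writers evenly
 * across the luns without taking a lock */
static struct toy_lun *rr_pick(struct toy_lun *luns, int nr_luns)
{
	int next = atomic_fetch_add(&next_lun, 1) + 1;

	return &luns[next % nr_luns];
}

int main(void)
{
	struct toy_lun luns[4] = { {0}, {1}, {2}, {3} };
	int i;

	for (i = 0; i < 8; i++)
		printf("write %d -> lun %d\n", i, rr_pick(luns, 4)->id);
	return 0;
}

Seeded at -1, the first write lands on lun 0 and each subsequent page
advances one lun, wrapping at nr_luns.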

Signed-off-by: Javier González <jg@lightnvm.io>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
 drivers/lightnvm/Kconfig  |   10 +
 drivers/lightnvm/Makefile |    1 +
 drivers/lightnvm/core.c   |    3 +-
 drivers/lightnvm/rrpc.c   | 1296 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/lightnvm/rrpc.h   |  236 +++++++++
 include/linux/lightnvm.h  |    5 +-
 6 files changed, 1547 insertions(+), 4 deletions(-)
 create mode 100644 drivers/lightnvm/rrpc.c
 create mode 100644 drivers/lightnvm/rrpc.h

diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
index 1f8412c..ab1fe57 100644
--- a/drivers/lightnvm/Kconfig
+++ b/drivers/lightnvm/Kconfig
@@ -14,3 +14,13 @@ menuconfig NVM
 	  If you say N, all options in this submenu will be skipped and disabled;
 	  only do this if you know what you are doing.
 
+if NVM
+
+config NVM_RRPC
+	tristate "Round-robin Hybrid Open-Channel SSD target"
+	---help---
+	Allows an open-channel SSD to be exposed as a block device to the
+	host. The target is implemented using a linear mapping table and
+	cost-based garbage collection. It is optimized for 4K IO sizes.
+
+endif # NVM
diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
index 38185e9..b2a39e2 100644
--- a/drivers/lightnvm/Makefile
+++ b/drivers/lightnvm/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-$(CONFIG_NVM)		:= core.o
+obj-$(CONFIG_NVM_RRPC)		+= rrpc.o
diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 6499922..5e4c2b8 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -169,7 +169,7 @@ static void nvm_core_free(struct nvm_dev *dev)
 static int nvm_core_init(struct nvm_dev *dev)
 {
 	dev->nr_luns = dev->identity.nchannels;
-	dev->sector_size = EXPOSED_PAGE_SIZE;
+	dev->sector_size = dev->ops->dev_sector_size;
 	INIT_LIST_HEAD(&dev->online_targets);
 
 	return 0;
@@ -541,6 +541,7 @@ int nvm_register(struct request_queue *q, char *disk_name,
 
 	dev->q = q;
 	dev->ops = ops;
+	dev->ops->dev_sector_size = DEV_EXPOSED_PAGE_SIZE;
 	strncpy(dev->name, disk_name, DISK_NAME_LEN);
 
 	ret = nvm_init(dev);
diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
new file mode 100644
index 0000000..5843383
--- /dev/null
+++ b/drivers/lightnvm/rrpc.c
@@ -0,0 +1,1296 @@
+/*
+ * Copyright (C) 2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mabj@itu.dk>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Implementation of a Round-robin page-based Hybrid FTL for Open-channel SSDs.
+ */
+
+#include "rrpc.h"
+
+static struct kmem_cache *rrpc_gcb_cache, *rrpc_rq_cache;
+static DECLARE_RWSEM(rrpc_lock);
+
+static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
+				struct nvm_rq *rqd, unsigned long flags);
+
+#define rrpc_for_each_lun(rrpc, rlun, i) \
+		for ((i) = 0, rlun = &(rrpc)->luns[0]; \
+			(i) < (rrpc)->nr_luns; (i)++, rlun = &(rrpc)->luns[(i)])
+
+static void rrpc_page_invalidate(struct rrpc *rrpc, struct rrpc_addr *a)
+{
+	struct rrpc_block *rblk = a->rblk;
+	unsigned int pg_offset;
+
+	lockdep_assert_held(&rrpc->rev_lock);
+
+	if (a->addr == ADDR_EMPTY || !rblk)
+		return;
+
+	spin_lock(&rblk->lock);
+
+	pg_offset = a->addr % rblk->parent->lun->nr_pages_per_blk;
+	WARN_ON(test_and_set_bit(pg_offset, rblk->invalid_pages));
+	rblk->nr_invalid_pages++;
+
+	spin_unlock(&rblk->lock);
+
+	rrpc->rev_trans_map[a->addr - rrpc->poffset].addr = ADDR_EMPTY;
+}
+
+static void rrpc_invalidate_range(struct rrpc *rrpc, sector_t slba,
+								unsigned len)
+{
+	sector_t i;
+
+	spin_lock(&rrpc->rev_lock);
+	for (i = slba; i < slba + len; i++) {
+		struct rrpc_addr *gp = &rrpc->trans_map[i];
+
+		rrpc_page_invalidate(rrpc, gp);
+		gp->rblk = NULL;
+	}
+	spin_unlock(&rrpc->rev_lock);
+}
+
+static struct nvm_rq *rrpc_inflight_laddr_acquire(struct rrpc *rrpc,
+					sector_t laddr, unsigned int pages)
+{
+	struct nvm_rq *rqd;
+	struct rrpc_inflight_rq *inf;
+
+	rqd = mempool_alloc(rrpc->rq_pool, GFP_ATOMIC);
+	if (!rqd)
+		return ERR_PTR(-ENOMEM);
+
+	inf = rrpc_get_inflight_rq(rqd);
+	if (rrpc_lock_laddr(rrpc, laddr, pages, inf)) {
+		mempool_free(rqd, rrpc->rq_pool);
+		return NULL;
+	}
+
+	return rqd;
+}
+
+static void rrpc_inflight_laddr_release(struct rrpc *rrpc, struct nvm_rq *rqd)
+{
+	struct rrpc_inflight_rq *inf = rrpc_get_inflight_rq(rqd);
+
+	rrpc_unlock_laddr(rrpc, inf);
+
+	mempool_free(rqd, rrpc->rq_pool);
+}
+
+static void rrpc_discard(struct rrpc *rrpc, struct bio *bio)
+{
+	sector_t slba = bio->bi_iter.bi_sector / NR_PHY_IN_LOG;
+	sector_t len = bio->bi_iter.bi_size / RRPC_EXPOSED_PAGE_SIZE;
+	struct nvm_rq *rqd;
+
+	do {
+		rqd = rrpc_inflight_laddr_acquire(rrpc, slba, len);
+		schedule();
+	} while (!rqd);
+
+	if (IS_ERR(rqd)) {
+		pr_err("rrpc: unable to acquire inflight IO\n");
+		bio_io_error(bio);
+		return;
+	}
+
+	rrpc_invalidate_range(rrpc, slba, len);
+	rrpc_inflight_laddr_release(rrpc, rqd);
+}
+
+static int block_is_full(struct rrpc_lun *rlun, struct rrpc_block *rblk)
+{
+	struct nvm_lun *lun = rlun->parent;
+
+	return (rblk->next_page == lun->nr_pages_per_blk);
+}
+
+static sector_t block_to_addr(struct rrpc_block *rblk)
+{
+	struct nvm_block *blk = rblk->parent;
+	struct nvm_lun *lun = rblk->parent->lun;
+
+	return blk->id * lun->nr_pages_per_blk;
+}
+
+/* requires lun->lock taken */
+static void rrpc_set_lun_cur(struct rrpc_lun *rlun, struct rrpc_block *rblk)
+{
+	BUG_ON(!rblk);
+
+	if (rlun->cur) {
+		spin_lock(&rlun->cur->lock);
+		WARN_ON(!block_is_full(rlun, rlun->cur));
+		spin_unlock(&rlun->cur->lock);
+	}
+	rlun->cur = rblk;
+}
+
+static struct rrpc_block *rrpc_get_blk(struct rrpc *rrpc, struct rrpc_lun *rlun,
+							unsigned long flags)
+{
+	struct nvm_block *blk;
+	struct rrpc_block *rblk;
+
+	blk = nvm_get_blk(rrpc->dev, rlun->parent, 0);
+	if (!blk)
+		return NULL;
+
+	rblk = &rlun->blocks[blk->id];
+	blk->priv = rblk;
+
+	bitmap_zero(rblk->invalid_pages, rlun->parent->nr_pages_per_blk);
+	rblk->next_page = 0;
+	rblk->nr_invalid_pages = 0;
+	atomic_set(&rblk->data_cmnt_size, 0);
+
+	return rblk;
+}
+
+static void rrpc_put_blk(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	nvm_put_blk(rrpc->dev, rblk->parent);
+}
+
+static struct rrpc_lun *get_next_lun(struct rrpc *rrpc)
+{
+	int next = atomic_inc_return(&rrpc->next_lun);
+
+	return &rrpc->luns[next % rrpc->nr_luns];
+}
+
+static void rrpc_gc_kick(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	unsigned int i;
+
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		rlun = &rrpc->luns[i];
+		queue_work(rrpc->krqd_wq, &rlun->ws_gc);
+	}
+}
+
+/*
+ * timer-driven GC: kick the per-lun GC workers at a fixed interval.
+ */
+static void rrpc_gc_timer(unsigned long data)
+{
+	struct rrpc *rrpc = (struct rrpc *)data;
+
+	rrpc_gc_kick(rrpc);
+	mod_timer(&rrpc->gc_timer, jiffies + msecs_to_jiffies(10));
+}
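+
+/*
+ * Editor's note: the 10 ms re-arm above drives all GC scheduling; the
+ * GC_TIME_SECS constant declared in rrpc.h is not referenced by this
+ * path.
+ */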
+
+static void rrpc_end_sync_bio(struct bio *bio, int error)
+{
+	struct completion *waiting = bio->bi_private;
+
+	if (error)
+		pr_err("nvm: gc request failed (%u).\n", error);
+
+	complete(waiting);
+}
+
+/*
+ * rrpc_move_valid_pages -- migrate live data off the block
+ * @rrpc: the 'rrpc' structure
+ * @block: the block from which to migrate live pages
+ *
+ * Description:
+ *   GC algorithms may call this function to migrate remaining live
+ *   pages off the block prior to erasing it. This function blocks
+ *   further execution until the operation is complete.
+ */
+static int rrpc_move_valid_pages(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct request_queue *q = rrpc->dev->q;
+	struct rrpc_rev_addr *rev;
+	struct nvm_rq *rqd;
+	struct bio *bio;
+	struct page *page;
+	int slot;
+	int nr_pgs_per_blk = rblk->parent->lun->nr_pages_per_blk;
+	sector_t phys_addr;
+	DECLARE_COMPLETION_ONSTACK(wait);
+
+	if (bitmap_full(rblk->invalid_pages, nr_pgs_per_blk))
+		return 0;
+
+	bio = bio_alloc(GFP_NOIO, 1);
+	if (!bio) {
+		pr_err("nvm: could not alloc bio to gc\n");
+		return -ENOMEM;
+	}
+
+	page = mempool_alloc(rrpc->page_pool, GFP_NOIO);
+
+	while ((slot = find_first_zero_bit(rblk->invalid_pages,
+					    nr_pgs_per_blk)) < nr_pgs_per_blk) {
+
+		/* Lock laddr */
+		phys_addr = (rblk->parent->id * nr_pgs_per_blk) + slot;
+
+try:
+		spin_lock(&rrpc->rev_lock);
+		/* Get logical address from physical to logical table */
+		rev = &rrpc->rev_trans_map[phys_addr - rrpc->poffset];
+		/* already updated by previous regular write */
+		if (rev->addr == ADDR_EMPTY) {
+			spin_unlock(&rrpc->rev_lock);
+			continue;
+		}
+
+		rqd = rrpc_inflight_laddr_acquire(rrpc, rev->addr, 1);
+		if (IS_ERR_OR_NULL(rqd)) {
+			spin_unlock(&rrpc->rev_lock);
+			schedule();
+			goto try;
+		}
+
+		spin_unlock(&rrpc->rev_lock);
+
+		/* Perform read to do GC */
+		bio->bi_iter.bi_sector = rrpc_get_sector(rev->addr);
+		bio->bi_rw = READ;
+		bio->bi_private = &wait;
+		bio->bi_end_io = rrpc_end_sync_bio;
+
+		/* TODO: may fail when EXP_PG_SIZE > PAGE_SIZE */
+		bio_add_pc_page(q, bio, page, RRPC_EXPOSED_PAGE_SIZE, 0);
+
+		if (rrpc_submit_io(rrpc, bio, rqd, NVM_IOTYPE_GC)) {
+			pr_err("rrpc: gc read failed.\n");
+			rrpc_inflight_laddr_release(rrpc, rqd);
+			goto finished;
+		}
+		wait_for_completion_io(&wait);
+
+		bio_reset(bio);
+		reinit_completion(&wait);
+
+		bio->bi_iter.bi_sector = rrpc_get_sector(rev->addr);
+		bio->bi_rw = WRITE;
+		bio->bi_private = &wait;
+		bio->bi_end_io = rrpc_end_sync_bio;
+
+		bio_add_pc_page(q, bio, page, RRPC_EXPOSED_PAGE_SIZE, 0);
+
+		/* turn the command around and write the data back to a new
+		 * address */
+		if (rrpc_submit_io(rrpc, bio, rqd, NVM_IOTYPE_GC)) {
+			pr_err("rrpc: gc write failed.\n");
+			rrpc_inflight_laddr_release(rrpc, rqd);
+			goto finished;
+		}
+		wait_for_completion_io(&wait);
+
+		rrpc_inflight_laddr_release(rrpc, rqd);
+
+		bio_reset(bio);
+	}
+
+finished:
+	mempool_free(page, rrpc->page_pool);
+	bio_put(bio);
+
+	if (!bitmap_full(rblk->invalid_pages, nr_pgs_per_blk)) {
+		pr_err("nvm: failed to garbage collect block\n");
+		return -EIO;
+	}
+
+	return 0;
+}
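+
+/*
+ * Editor's note: the loop above rewrites each live page through the
+ * regular write path, so rrpc_map_page() picks a fresh physical page
+ * and rrpc_update_map() invalidates the old one; once the invalid
+ * bitmap is full, the block is safe to erase.
+ */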
+
+static void rrpc_block_gc(struct work_struct *work)
+{
+	struct rrpc_block_gc *gcb = container_of(work, struct rrpc_block_gc,
+									ws_gc);
+	struct rrpc *rrpc = gcb->rrpc;
+	struct rrpc_block *rblk = gcb->rblk;
+	struct nvm_dev *dev = rrpc->dev;
+
+	pr_debug("nvm: block '%llu' being reclaimed\n", rblk->parent->id);
+
+	if (rrpc_move_valid_pages(rrpc, rblk))
+		goto done;
+
+	nvm_erase_blk(dev, rblk->parent);
+	rrpc_put_blk(rrpc, rblk);
+done:
+	mempool_free(gcb, rrpc->gcb_pool);
+}
+
+/* comparator used to keep the block with the highest number of invalid
+ * pages; on a tie, the block nearest the head of the list wins */
+static struct rrpc_block *rblock_max_invalid(struct rrpc_block *ra,
+							struct rrpc_block *rb)
+{
+	if (ra->nr_invalid_pages == rb->nr_invalid_pages)
+		return ra;
+
+	return (ra->nr_invalid_pages < rb->nr_invalid_pages) ? rb : ra;
+}
+
+/* linearly scan for the block with the highest number of invalid pages;
+ * requires lun->lock to be held */
+static struct rrpc_block *block_prio_find_max(struct rrpc_lun *rlun)
+{
+	struct list_head *prio_list = &rlun->prio_list;
+	struct rrpc_block *rblock, *max;
+
+	BUG_ON(list_empty(prio_list));
+
+	max = list_first_entry(prio_list, struct rrpc_block, prio);
+	list_for_each_entry(rblock, prio_list, prio)
+		max = rblock_max_invalid(max, rblock);
+
+	return max;
+}
+
+static void rrpc_lun_gc(struct work_struct *work)
+{
+	struct rrpc_lun *rlun = container_of(work, struct rrpc_lun, ws_gc);
+	struct rrpc *rrpc = rlun->rrpc;
+	struct nvm_lun *lun = rlun->parent;
+	struct rrpc_block_gc *gcb;
+	unsigned int nr_blocks_need;
+
+	nr_blocks_need = lun->nr_blocks / GC_LIMIT_INVERSE;
+
+	if (nr_blocks_need < rrpc->nr_luns)
+		nr_blocks_need = rrpc->nr_luns;
+
+	spin_lock(&lun->lock);
+	while (nr_blocks_need > lun->nr_free_blocks &&
+					!list_empty(&rlun->prio_list)) {
+		struct rrpc_block *rblock = block_prio_find_max(rlun);
+		struct nvm_block *block = rblock->parent;
+
+		if (!rblock->nr_invalid_pages)
+			break;
+
+		list_del_init(&rblock->prio);
+
+		BUG_ON(!block_is_full(rlun, rblock));
+
+		pr_debug("rrpc: selected block '%llu' for GC\n", block->id);
+
+		gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
+		if (!gcb)
+			break;
+
+		gcb->rrpc = rrpc;
+		gcb->rblk = rblock;
+		INIT_WORK(&gcb->ws_gc, rrpc_block_gc);
+
+		queue_work(rrpc->kgc_wq, &gcb->ws_gc);
+
+		nr_blocks_need--;
+	}
+	spin_unlock(&lun->lock);
+
+	/* TODO: Hint that request queue can be started again */
+}
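+
+/*
+ * Editor's note -- worked example of the trigger above: with
+ * GC_LIMIT_INVERSE = 10 and a lun of 1024 blocks, nr_blocks_need is
+ * 1024 / 10 = 102 (raised to nr_luns when smaller), so blocks are only
+ * reclaimed once fewer than 102 blocks remain free in the lun.
+ */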
+
+static void rrpc_gc_queue(struct work_struct *work)
+{
+	struct rrpc_block_gc *gcb = container_of(work, struct rrpc_block_gc,
+									ws_gc);
+	struct rrpc *rrpc = gcb->rrpc;
+	struct rrpc_block *rblk = gcb->rblk;
+	struct nvm_lun *lun = rblk->parent->lun;
+	struct rrpc_lun *rlun = &rrpc->luns[lun->id - rrpc->lun_offset];
+
+	spin_lock(&rlun->lock);
+	list_add_tail(&rblk->prio, &rlun->prio_list);
+	spin_unlock(&rlun->lock);
+
+	mempool_free(gcb, rrpc->gcb_pool);
+	pr_debug("nvm: block '%llu' is full, allow GC (sched)\n",
+							rblk->parent->id);
+}
+
+static const struct block_device_operations rrpc_fops = {
+	.owner		= THIS_MODULE,
+};
+
+static struct rrpc_lun *rrpc_get_lun_rr(struct rrpc *rrpc, int is_gc)
+{
+	unsigned int i;
+	struct rrpc_lun *rlun, *max_free;
+
+	if (!is_gc)
+		return get_next_lun(rrpc);
+
+	/* during GC, we don't care about the round-robin order; instead we
+	 * want to maintain evenness of free blocks across the luns. */
+	max_free = &rrpc->luns[0];
+	/* prevent a GC-ing lun from devouring pages of a lun with few
+	 * free blocks. We don't take the lock, as we only need an
+	 * estimate. */
+	rrpc_for_each_lun(rrpc, rlun, i) {
+		if (rlun->parent->nr_free_blocks >
+					max_free->parent->nr_free_blocks)
+			max_free = rlun;
+	}
+
+	return max_free;
+}
+
+static struct rrpc_addr *rrpc_update_map(struct rrpc *rrpc, sector_t laddr,
+					struct rrpc_block *rblk, sector_t paddr)
+{
+	struct rrpc_addr *gp;
+	struct rrpc_rev_addr *rev;
+
+	BUG_ON(laddr >= rrpc->nr_pages);
+
+	gp = &rrpc->trans_map[laddr];
+	spin_lock(&rrpc->rev_lock);
+	if (gp->rblk)
+		rrpc_page_invalidate(rrpc, gp);
+
+	gp->addr = paddr;
+	gp->rblk = rblk;
+
+	rev = &rrpc->rev_trans_map[gp->addr - rrpc->poffset];
+	rev->addr = laddr;
+	spin_unlock(&rrpc->rev_lock);
+
+	return gp;
+}
+
+static sector_t rrpc_alloc_addr(struct rrpc_lun *rlun, struct rrpc_block *rblk)
+{
+	sector_t addr = ADDR_EMPTY;
+
+	spin_lock(&rblk->lock);
+	if (block_is_full(rlun, rblk))
+		goto out;
+
+	addr = block_to_addr(rblk) + rblk->next_page;
+
+	rblk->next_page++;
+out:
+	spin_unlock(&rblk->lock);
+	return addr;
+}
+
+/* Simple round-robin Logical to physical address translation.
+ *
+ * Retrieve the mapping using the active append point. Then update the ap for
+ * the next write to the disk.
+ *
+ * Returns the rrpc_addr with the physical address and block. The entry
+ * points into rrpc->trans_map and must not be freed by the caller.
+ */
+static struct rrpc_addr *rrpc_map_page(struct rrpc *rrpc, sector_t laddr,
+								int is_gc)
+{
+	struct rrpc_lun *rlun;
+	struct rrpc_block *rblk;
+	struct nvm_lun *lun;
+	sector_t paddr;
+
+	rlun = rrpc_get_lun_rr(rrpc, is_gc);
+	lun = rlun->parent;
+
+	if (!is_gc && lun->nr_free_blocks < rrpc->nr_luns * 4)
+		return NULL;
+
+	spin_lock(&rlun->lock);
+
+	rblk = rlun->cur;
+retry:
+	paddr = rrpc_alloc_addr(rlun, rblk);
+
+	if (paddr == ADDR_EMPTY) {
+		rblk = rrpc_get_blk(rrpc, rlun, 0);
+		if (rblk) {
+			rrpc_set_lun_cur(rlun, rblk);
+			goto retry;
+		}
+
+		if (is_gc) {
+			/* retry from emergency gc block */
+			paddr = rrpc_alloc_addr(rlun, rlun->gc_cur);
+			if (paddr == ADDR_EMPTY) {
+				rblk = rrpc_get_blk(rrpc, rlun, 1);
+				if (!rblk) {
+					pr_err("rrpc: no more blocks\n");
+					goto err;
+				}
+
+				rlun->gc_cur = rblk;
+				paddr = rrpc_alloc_addr(rlun, rlun->gc_cur);
+			}
+			rblk = rlun->gc_cur;
+		}
+	}
+
+	spin_unlock(&rlun->lock);
+	return rrpc_update_map(rrpc, laddr, rblk, paddr);
+err:
+	spin_unlock(&rlun->lock);
+	return NULL;
+}
+
+static void rrpc_run_gc(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct rrpc_block_gc *gcb;
+
+	gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
+	if (!gcb) {
+		pr_err("rrpc: unable to queue block for gc.");
+		return;
+	}
+
+	gcb->rrpc = rrpc;
+	gcb->rblk = rblk;
+
+	INIT_WORK(&gcb->ws_gc, rrpc_gc_queue);
+	queue_work(rrpc->kgc_wq, &gcb->ws_gc);
+}
+
+static void rrpc_end_io_write(struct rrpc *rrpc, struct rrpc_rq *rrqd,
+						sector_t laddr, uint8_t npages)
+{
+	struct rrpc_addr *p;
+	struct rrpc_block *rblk;
+	struct nvm_lun *lun;
+	int cmnt_size, i;
+
+	for (i = 0; i < npages; i++) {
+		p = &rrpc->trans_map[laddr + i];
+		rblk = p->rblk;
+		lun = rblk->parent->lun;
+
+		cmnt_size = atomic_inc_return(&rblk->data_cmnt_size);
+		if (unlikely(cmnt_size == lun->nr_pages_per_blk))
+			rrpc_run_gc(rrpc, rblk);
+	}
+}
+
+static void rrpc_end_io(struct nvm_rq *rqd, int error)
+{
+	struct rrpc *rrpc = container_of(rqd->ins, struct rrpc, instance);
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+	uint8_t npages = rqd->npages;
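+	/* editor's note: by end_io the bio iterator has advanced past the
+	 * request, so stepping back npages recovers the starting laddr */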
+	sector_t laddr = rrpc_get_laddr(rqd->bio) - npages;
+
+	if (bio_data_dir(rqd->bio) == WRITE)
+		rrpc_end_io_write(rrpc, rrqd, laddr, npages);
+
+	if (rrqd->flags & NVM_IOTYPE_GC)
+		return;
+
+	rrpc_unlock_rq(rrpc, rqd);
+	bio_put(rqd->bio);
+
+	if (npages > 1)
+		nvm_free_ppalist(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
+
+	mempool_free(rqd, rrpc->rq_pool);
+}
+
+static int rrpc_read_ppalist_rq(struct rrpc *rrpc, struct bio *bio,
+			struct nvm_rq *rqd, unsigned long flags, int npages)
+{
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+	struct rrpc_addr *gp;
+	sector_t laddr = rrpc_get_laddr(bio);
+	int is_gc = flags & NVM_IOTYPE_GC;
+	int i;
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd)) {
+		nvm_free_ppalist(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
+		return NVM_IO_REQUEUE;
+	}
+
+	for (i = 0; i < npages; i++) {
+		/* We assume that mapping occurs at 4KB granularity */
+		BUG_ON(!(laddr + i >= 0 && laddr + i < rrpc->nr_pages));
+		gp = &rrpc->trans_map[laddr + i];
+
+		if (gp->rblk) {
+			rqd->ppa_list[i] = gp->addr;
+		} else {
+			BUG_ON(is_gc);
+			rrpc_unlock_laddr(rrpc, r);
+			nvm_free_ppalist(rrpc->dev, rqd->ppa_list,
+							rqd->dma_ppa_list);
+			return NVM_IO_DONE;
+		}
+	}
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_read_rq(struct rrpc *rrpc, struct bio *bio, struct nvm_rq *rqd,
+							unsigned long flags)
+{
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+	int is_gc = flags & NVM_IOTYPE_GC;
+	sector_t laddr = rrpc_get_laddr(bio);
+	struct rrpc_addr *gp;
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd))
+		return NVM_IO_REQUEUE;
+
+	BUG_ON(!(laddr >= 0 && laddr < rrpc->nr_pages));
+	gp = &rrpc->trans_map[laddr];
+
+	if (gp->rblk) {
+		rqd->ppa = rrpc_get_sector(gp->addr);
+	} else {
+		BUG_ON(is_gc);
+		rrpc_unlock_rq(rrpc, rqd);
+		return NVM_IO_DONE;
+	}
+
+	rrqd->addr = gp;
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_write_ppalist_rq(struct rrpc *rrpc, struct bio *bio,
+			struct nvm_rq *rqd, unsigned long flags, int npages)
+{
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+	struct rrpc_addr *p;
+	sector_t laddr = rrpc_get_laddr(bio);
+	int is_gc = flags & NVM_IOTYPE_GC;
+	int i;
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd)) {
+		nvm_free_ppalist(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
+		return NVM_IO_REQUEUE;
+	}
+
+	for (i = 0; i < npages; i++) {
+		/* We assume that mapping occurs at 4KB granularity */
+		p = rrpc_map_page(rrpc, laddr + i, is_gc);
+		if (!p) {
+			BUG_ON(is_gc);
+			rrpc_unlock_laddr(rrpc, r);
+			nvm_free_ppalist(rrpc->dev, rqd->ppa_list,
+							rqd->dma_ppa_list);
+			rrpc_gc_kick(rrpc);
+			return NVM_IO_REQUEUE;
+		}
+
+		rqd->ppa_list[i] = p->addr;
+	}
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_write_rq(struct rrpc *rrpc, struct bio *bio,
+				struct nvm_rq *rqd, unsigned long flags)
+{
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+	struct rrpc_addr *p;
+	int is_gc = flags & NVM_IOTYPE_GC;
+	sector_t laddr = rrpc_get_laddr(bio);
+
+	if (!is_gc && rrpc_lock_rq(rrpc, bio, rqd))
+		return NVM_IO_REQUEUE;
+
+	p = rrpc_map_page(rrpc, laddr, is_gc);
+	if (!p) {
+		BUG_ON(is_gc);
+		rrpc_unlock_rq(rrpc, rqd);
+		rrpc_gc_kick(rrpc);
+		return NVM_IO_REQUEUE;
+	}
+
+	rqd->ppa = rrpc_get_sector(p->addr);
+	rrqd->addr = p;
+
+	return NVM_IO_OK;
+}
+
+static int rrpc_setup_rq(struct rrpc *rrpc, struct bio *bio,
+			struct nvm_rq *rqd, unsigned long flags, uint8_t npages)
+{
+	if (npages > 1) {
+		rqd->ppa_list = nvm_alloc_ppalist(rrpc->dev, GFP_KERNEL,
+							&rqd->dma_ppa_list);
+		if (!rqd->ppa_list) {
+			pr_err("rrpc: not able to allocate ppa list\n");
+			return NVM_IO_ERR;
+		}
+
+		if (bio_rw(bio) == WRITE)
+			return rrpc_write_ppalist_rq(rrpc, bio, rqd, flags,
+									npages);
+
+		return rrpc_read_ppalist_rq(rrpc, bio, rqd, flags, npages);
+	}
+
+	if (bio_rw(bio) == WRITE)
+		return rrpc_write_rq(rrpc, bio, rqd, flags);
+
+	return rrpc_read_rq(rrpc, bio, rqd, flags);
+}
+
+static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
+				struct nvm_rq *rqd, unsigned long flags)
+{
+	int err;
+	struct rrpc_rq *rrq = nvm_rq_to_pdu(rqd);
+	uint8_t npages = rrpc_get_pages(bio);
+
+	err = rrpc_setup_rq(rrpc, bio, rqd, flags, npages);
+	if (err)
+		return err;
+
+	bio_get(bio);
+	rqd->bio = bio;
+	rqd->ins = &rrpc->instance;
+	rqd->npages = npages;
+	rrq->flags = flags;
+
+	err = nvm_submit_io(rrpc->dev, rqd);
+	if (err) {
+		pr_err("rrpc: IO submission failed: %d\n", err);
+		return NVM_IO_ERR;
+	}
+
+	return NVM_IO_OK;
+}
+
+static void rrpc_make_rq(struct request_queue *q, struct bio *bio)
+{
+	struct rrpc *rrpc = q->queuedata;
+	struct nvm_rq *rqd;
+	int err;
+
+	if (bio->bi_rw & REQ_DISCARD) {
+		rrpc_discard(rrpc, bio);
+		return;
+	}
+
+	rqd = mempool_alloc(rrpc->rq_pool, GFP_KERNEL);
+	if (!rqd) {
+		pr_err_ratelimited("rrpc: not able to queue bio\n");
+		bio_io_error(bio);
+		return;
+	}
+
+	err = rrpc_submit_io(rrpc, bio, rqd, NVM_IOTYPE_NONE);
+	switch (err) {
+	case NVM_IO_OK:
+		return;
+	case NVM_IO_ERR:
+		if (rqd->ppa_list)
+			nvm_free_ppalist(rrpc->dev, rqd->ppa_list,
+							rqd->dma_ppa_list);
+		bio_io_error(bio);
+		break;
+	case NVM_IO_DONE:
+		bio_endio(bio, 0);
+		break;
+	case NVM_IO_REQUEUE:
+		spin_lock(&rrpc->bio_lock);
+		bio_list_add(&rrpc->requeue_bios, bio);
+		spin_unlock(&rrpc->bio_lock);
+		queue_work(rrpc->kgc_wq, &rrpc->ws_requeue);
+		break;
+	}
+
+	mempool_free(rqd, rrpc->rq_pool);
+}
+
+static void rrpc_requeue(struct work_struct *work)
+{
+	struct rrpc *rrpc = container_of(work, struct rrpc, ws_requeue);
+	struct bio_list bios;
+	struct bio *bio;
+
+	bio_list_init(&bios);
+
+	spin_lock(&rrpc->bio_lock);
+	bio_list_merge(&bios, &rrpc->requeue_bios);
+	bio_list_init(&rrpc->requeue_bios);
+	spin_unlock(&rrpc->bio_lock);
+
+	while ((bio = bio_list_pop(&bios)))
+		rrpc_make_rq(rrpc->disk->queue, bio);
+}
+
+static void rrpc_gc_free(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	int i;
+
+	if (rrpc->krqd_wq)
+		destroy_workqueue(rrpc->krqd_wq);
+
+	if (rrpc->kgc_wq)
+		destroy_workqueue(rrpc->kgc_wq);
+
+	if (!rrpc->luns)
+		return;
+
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		rlun = &rrpc->luns[i];
+
+		if (!rlun->blocks)
+			break;
+		vfree(rlun->blocks);
+	}
+}
+
+static int rrpc_gc_init(struct rrpc *rrpc)
+{
+	rrpc->krqd_wq = alloc_workqueue("rrpc-lun", WQ_MEM_RECLAIM|WQ_UNBOUND,
+						rrpc->nr_luns);
+	if (!rrpc->krqd_wq)
+		return -ENOMEM;
+
+	rrpc->kgc_wq = alloc_workqueue("rrpc-bg", WQ_MEM_RECLAIM, 1);
+	if (!rrpc->kgc_wq)
+		return -ENOMEM;
+
+	setup_timer(&rrpc->gc_timer, rrpc_gc_timer, (unsigned long)rrpc);
+
+	return 0;
+}
+
+static void rrpc_map_free(struct rrpc *rrpc)
+{
+	vfree(rrpc->rev_trans_map);
+	vfree(rrpc->trans_map);
+}
+
+static int rrpc_l2p_update(u64 slba, u64 nlb, u64 *entries, void *private)
+{
+	struct rrpc *rrpc = (struct rrpc *)private;
+	struct nvm_dev *dev = rrpc->dev;
+	struct rrpc_addr *addr = rrpc->trans_map + slba;
+	struct rrpc_rev_addr *raddr = rrpc->rev_trans_map;
+	sector_t max_pages = dev->total_pages * (dev->sector_size >> 9);
+	u64 elba = slba + nlb;
+	u64 i;
+
+	if (unlikely(elba > dev->total_pages)) {
+		pr_err("nvm: L2P data from device is out of bounds!\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < nlb; i++) {
+		u64 pba = le64_to_cpu(entries[i]);
+		/* LNVM treats address spaces as silos: the LBA and PBA
+		 * spaces are equally large and zero-indexed. */
+		if (unlikely(pba >= max_pages && pba != U64_MAX)) {
+			pr_err("nvm: L2P data entry is out of bounds!\n");
+			return -EINVAL;
+		}
+
+		/* Address zero is special: the first page on a disk is
+		 * protected, as it often holds internal device boot
+		 * information. */
+		if (!pba)
+			continue;
+
+		addr[i].addr = pba;
+		raddr[pba].addr = slba + i;
+	}
+
+	return 0;
+}
+
+static int rrpc_map_init(struct rrpc *rrpc)
+{
+	struct nvm_dev *dev = rrpc->dev;
+	sector_t i;
+	int ret;
+
+	rrpc->trans_map = vzalloc(sizeof(struct rrpc_addr) * rrpc->nr_pages);
+	if (!rrpc->trans_map)
+		return -ENOMEM;
+
+	rrpc->rev_trans_map = vmalloc(sizeof(struct rrpc_rev_addr)
+							* rrpc->nr_pages);
+	if (!rrpc->rev_trans_map)
+		return -ENOMEM;
+
+	for (i = 0; i < rrpc->nr_pages; i++) {
+		struct rrpc_addr *p = &rrpc->trans_map[i];
+		struct rrpc_rev_addr *r = &rrpc->rev_trans_map[i];
+
+		p->addr = ADDR_EMPTY;
+		r->addr = ADDR_EMPTY;
+	}
+
+	if (!dev->ops->get_l2p_tbl)
+		return 0;
+
+	/* Bring up the mapping table from device */
+	ret = dev->ops->get_l2p_tbl(dev->q, 0, dev->total_pages,
+							rrpc_l2p_update, rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not read L2P table.\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+
+/* Minimum pages needed within a lun */
+#define PAGE_POOL_SIZE 16
+#define ADDR_POOL_SIZE 64
+
+static int rrpc_core_init(struct rrpc *rrpc)
+{
+	down_write(&rrpc_lock);
+	if (!rrpc_gcb_cache) {
+		rrpc_gcb_cache = kmem_cache_create("rrpc_gcb",
+				sizeof(struct rrpc_block_gc), 0, 0, NULL);
+		if (!rrpc_gcb_cache) {
+			up_write(&rrpc_lock);
+			return -ENOMEM;
+		}
+
+		rrpc_rq_cache = kmem_cache_create("rrpc_rq",
+				sizeof(struct nvm_rq) + sizeof(struct rrpc_rq),
+				0, 0, NULL);
+		if (!rrpc_rq_cache) {
+			kmem_cache_destroy(rrpc_gcb_cache);
+			up_write(&rrpc_lock);
+			return -ENOMEM;
+		}
+	}
+	up_write(&rrpc_lock);
+
+	rrpc->page_pool = mempool_create_page_pool(PAGE_POOL_SIZE, 0);
+	if (!rrpc->page_pool)
+		return -ENOMEM;
+
+	rrpc->gcb_pool = mempool_create_slab_pool(rrpc->dev->nr_luns,
+								rrpc_gcb_cache);
+	if (!rrpc->gcb_pool)
+		return -ENOMEM;
+
+	rrpc->rq_pool = mempool_create_slab_pool(64, rrpc_rq_cache);
+	if (!rrpc->rq_pool)
+		return -ENOMEM;
+
+	spin_lock_init(&rrpc->inflights.lock);
+	INIT_LIST_HEAD(&rrpc->inflights.reqs);
+
+	return 0;
+}
+
+static void rrpc_core_free(struct rrpc *rrpc)
+{
+	if (rrpc->page_pool)
+		mempool_destroy(rrpc->page_pool);
+	if (rrpc->gcb_pool)
+		mempool_destroy(rrpc->gcb_pool);
+	if (rrpc->rq_pool)
+		mempool_destroy(rrpc->rq_pool);
+}
+
+static void rrpc_luns_free(struct rrpc *rrpc)
+{
+	kfree(rrpc->luns);
+}
+
+static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end)
+{
+	struct nvm_dev *dev = rrpc->dev;
+	struct nvm_lun *luns;
+	struct rrpc_lun *rlun;
+	int i, j;
+
+	spin_lock_init(&rrpc->rev_lock);
+
+	luns = dev->bm->get_luns(dev, lun_begin, lun_end);
+	if (!luns)
+		return -EINVAL;
+
+	rrpc->luns = kcalloc(rrpc->nr_luns, sizeof(struct rrpc_lun),
+								GFP_KERNEL);
+	if (!rrpc->luns)
+		return -ENOMEM;
+
+	/* 1:1 mapping */
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		struct nvm_lun *lun = &luns[i];
+
+		if (lun->nr_pages_per_blk >
+				MAX_INVALID_PAGES_STORAGE * BITS_PER_LONG) {
+			pr_err("rrpc: number of pages per block too high.");
+			goto err;
+		}
+
+		rlun = &rrpc->luns[i];
+		rlun->rrpc = rrpc;
+		rlun->parent = lun;
+		INIT_LIST_HEAD(&rlun->prio_list);
+		INIT_WORK(&rlun->ws_gc, rrpc_lun_gc);
+		spin_lock_init(&rlun->lock);
+
+		rrpc->total_blocks += lun->nr_blocks;
+		rrpc->nr_pages += lun->nr_blocks * lun->nr_pages_per_blk;
+
+		rlun->blocks = vzalloc(sizeof(struct rrpc_block) *
+						 lun->nr_blocks);
+		if (!rlun->blocks)
+			goto err;
+
+		for (j = 0; j < lun->nr_blocks; j++) {
+			struct rrpc_block *rblk = &rlun->blocks[j];
+			struct nvm_block *blk = &lun->blocks[j];
+
+			rblk->parent = blk;
+			INIT_LIST_HEAD(&rblk->prio);
+			spin_lock_init(&rblk->lock);
+		}
+	}
+
+	return 0;
+err:
+	return -ENOMEM;
+}
+
+static void rrpc_free(struct rrpc *rrpc)
+{
+	rrpc_gc_free(rrpc);
+	rrpc_map_free(rrpc);
+	rrpc_core_free(rrpc);
+	rrpc_luns_free(rrpc);
+
+	kfree(rrpc);
+}
+
+static void rrpc_exit(void *private)
+{
+	struct rrpc *rrpc = private;
+
+	del_timer(&rrpc->gc_timer);
+
+	flush_workqueue(rrpc->krqd_wq);
+	flush_workqueue(rrpc->kgc_wq);
+
+	rrpc_free(rrpc);
+}
+
+static sector_t rrpc_capacity(void *private)
+{
+	struct rrpc *rrpc = private;
+	struct nvm_dev *dev = rrpc->dev;
+	sector_t reserved;
+
+	/* cur, gc, and two emergency blocks for each lun */
+	reserved = rrpc->nr_luns * dev->max_pages_per_blk * 4;
+
+	if (reserved > rrpc->nr_pages) {
+		pr_err("rrpc: not enough space available to expose storage.\n");
+		return 0;
+	}
+
+	return ((rrpc->nr_pages - reserved) / 10) * 9 * NR_PHY_IN_LOG;
+}
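+
+/*
+ * Editor's note -- worked example of the formula above: 4 luns with
+ * max_pages_per_blk = 256 reserve 4 * 256 * 4 = 4096 pages; with
+ * nr_pages = 262144 the target exposes ((262144 - 4096) / 10) * 9 =
+ * 232236 pages, i.e. 232236 * 8 (NR_PHY_IN_LOG) = 1857888 512-byte
+ * sectors.
+ */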
+
+/*
+ * Looks up the logical address in the reverse translation map and checks
+ * whether it is still valid, by comparing the forward-mapped physical
+ * address against the page's own physical address; pages that no longer
+ * match are marked invalid in the block.
+ */
+static void rrpc_block_map_update(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct nvm_lun *lun = rblk->parent->lun;
+	int offset;
+	struct rrpc_addr *laddr;
+	sector_t paddr, pladdr;
+
+	for (offset = 0; offset < lun->nr_pages_per_blk; offset++) {
+		paddr = block_to_addr(rblk) + offset;
+
+		pladdr = rrpc->rev_trans_map[paddr].addr;
+		if (pladdr == ADDR_EMPTY)
+			continue;
+
+		laddr = &rrpc->trans_map[pladdr];
+
+		if (paddr == laddr->addr) {
+			laddr->rblk = rblk;
+		} else {
+			set_bit(offset, rblk->invalid_pages);
+			rblk->nr_invalid_pages++;
+		}
+	}
+}
+
+static int rrpc_blocks_init(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	struct rrpc_block *rblk;
+	int lun_iter, blk_iter;
+
+	for (lun_iter = 0; lun_iter < rrpc->nr_luns; lun_iter++) {
+		rlun = &rrpc->luns[lun_iter];
+
+		for (blk_iter = 0; blk_iter < rlun->parent->nr_blocks;
+								blk_iter++) {
+			rblk = &rlun->blocks[blk_iter];
+			rrpc_block_map_update(rrpc, rblk);
+		}
+	}
+
+	return 0;
+}
+
+static int rrpc_luns_configure(struct rrpc *rrpc)
+{
+	struct rrpc_lun *rlun;
+	struct rrpc_block *rblk;
+	int i;
+
+	for (i = 0; i < rrpc->nr_luns; i++) {
+		rlun = &rrpc->luns[i];
+
+		rblk = rrpc_get_blk(rrpc, rlun, 0);
+		if (!rblk)
+			return -EINVAL;
+
+		rrpc_set_lun_cur(rlun, rblk);
+
+		/* Emergency gc block */
+		rblk = rrpc_get_blk(rrpc, rlun, 1);
+		if (!rblk)
+			return -EINVAL;
+		rlun->gc_cur = rblk;
+	}
+
+	return 0;
+}
+
+static struct nvm_tgt_type tt_rrpc;
+
+static void *rrpc_init(struct nvm_dev *dev, struct gendisk *tdisk,
+						int lun_begin, int lun_end)
+{
+	struct request_queue *bqueue = dev->q;
+	struct request_queue *tqueue = tdisk->queue;
+	struct rrpc *rrpc;
+	int ret;
+
+	rrpc = kzalloc(sizeof(struct rrpc), GFP_KERNEL);
+	if (!rrpc) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	rrpc->instance.tt = &tt_rrpc;
+	rrpc->dev = dev;
+	rrpc->disk = tdisk;
+
+	bio_list_init(&rrpc->requeue_bios);
+	spin_lock_init(&rrpc->bio_lock);
+	INIT_WORK(&rrpc->ws_requeue, rrpc_requeue);
+
+	rrpc->nr_luns = lun_end - lun_begin + 1;
+
+	/* simple round-robin strategy */
+	atomic_set(&rrpc->next_lun, -1);
+
+	ret = rrpc_luns_init(rrpc, lun_begin, lun_end);
+	if (ret) {
+		pr_err("nvm: could not initialize luns\n");
+		goto err;
+	}
+
+	rrpc->poffset = rrpc->luns[0].parent->nr_blocks *
+			rrpc->luns[0].parent->nr_pages_per_blk * lun_begin;
+	rrpc->lun_offset = lun_begin;
+
+	ret = rrpc_core_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize core\n");
+		goto err;
+	}
+
+	ret = rrpc_map_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize maps\n");
+		goto err;
+	}
+
+	ret = rrpc_blocks_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize state for blocks\n");
+		goto err;
+	}
+
+	ret = rrpc_luns_configure(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: not enough blocks available in LUNs.\n");
+		goto err;
+	}
+
+	ret = rrpc_gc_init(rrpc);
+	if (ret) {
+		pr_err("nvm: rrpc: could not initialize gc\n");
+		goto err;
+	}
+
+	/* inherit the size from the underlying device */
+	blk_queue_logical_block_size(tqueue, queue_physical_block_size(bqueue));
+	blk_queue_max_hw_sectors(tqueue, queue_max_hw_sectors(bqueue));
+
+	pr_info("nvm: rrpc initialized with %u luns and %llu pages.\n",
+			rrpc->nr_luns, (unsigned long long)rrpc->nr_pages);
+
+	mod_timer(&rrpc->gc_timer, jiffies + msecs_to_jiffies(10));
+
+	return rrpc;
+err:
+	rrpc_free(rrpc);
+	return ERR_PTR(ret);
+}
+
+/* round robin, page-based FTL, and cost-based GC */
+static struct nvm_tgt_type tt_rrpc = {
+	.name		= "rrpc",
+
+	.make_rq	= rrpc_make_rq,
+	.capacity	= rrpc_capacity,
+	.end_io		= rrpc_end_io,
+
+	.init		= rrpc_init,
+	.exit		= rrpc_exit,
+};
+
+static int __init rrpc_module_init(void)
+{
+	return nvm_register_target(&tt_rrpc);
+}
+
+static void rrpc_module_exit(void)
+{
+	nvm_unregister_target(&tt_rrpc);
+}
+
+module_init(rrpc_module_init);
+module_exit(rrpc_module_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Hybrid Target for Open-Channel SSDs");
diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h
new file mode 100644
index 0000000..706ba0f
--- /dev/null
+++ b/drivers/lightnvm/rrpc.h
@@ -0,0 +1,236 @@
+/*
+ * Copyright (C) 2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mabj@itu.dk>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Implementation of a Round-robin page-based Hybrid FTL for Open-channel SSDs.
+ */
+
+#ifndef RRPC_H_
+#define RRPC_H_
+
+#include <linux/blkdev.h>
+#include <linux/blk-mq.h>
+#include <linux/bio.h>
+#include <linux/module.h>
+#include <linux/kthread.h>
+#include <linux/vmalloc.h>
+
+#include <linux/lightnvm.h>
+
+/* Run only GC if less than 1/X blocks are free */
+#define GC_LIMIT_INVERSE 10
+#define GC_TIME_SECS 100
+
+#define RRPC_SECTOR (512)
+#define RRPC_EXPOSED_PAGE_SIZE (4096)
+
+#define NR_PHY_IN_LOG (RRPC_EXPOSED_PAGE_SIZE / RRPC_SECTOR)
+
+struct rrpc_inflight {
+	struct list_head reqs;
+	spinlock_t lock;
+};
+
+struct rrpc_inflight_rq {
+	struct list_head list;
+	sector_t l_start;
+	sector_t l_end;
+};
+
+struct rrpc_rq {
+	struct rrpc_inflight_rq inflight_rq;
+	struct rrpc_addr *addr;
+	unsigned long flags;
+};
+
+struct rrpc_block {
+	struct nvm_block *parent;
+	struct list_head prio;
+
+#define MAX_INVALID_PAGES_STORAGE 8
+	/* Bitmap for invalid page entries */
+	unsigned long invalid_pages[MAX_INVALID_PAGES_STORAGE];
+	/* points to the next writable page within a block */
+	unsigned int next_page;
+	/* number of pages that are invalid, wrt host page size */
+	unsigned int nr_invalid_pages;
+
+	spinlock_t lock;
+	atomic_t data_cmnt_size; /* data pages committed to stable storage */
+};
+
+struct rrpc_lun {
+	struct rrpc *rrpc;
+	struct nvm_lun *parent;
+	struct rrpc_block *cur, *gc_cur;
+	struct rrpc_block *blocks;	/* Reference to block allocation */
+	struct list_head prio_list;		/* Blocks that may be GC'ed */
+	struct work_struct ws_gc;
+
+	spinlock_t lock;
+};
+
+struct rrpc {
+	/* instance must be kept in top to resolve rrpc in unprep */
+	struct nvm_tgt_instance instance;
+
+	struct nvm_dev *dev;
+	struct gendisk *disk;
+
+	sector_t poffset; /* physical page offset */
+	int lun_offset;
+
+	int nr_luns;
+	struct rrpc_lun *luns;
+
+	/* calculated values */
+	unsigned long nr_pages;
+	unsigned long total_blocks;
+
+	/* Write strategy variables. Move these into a separate structure,
+	 * one per strategy */
+	atomic_t next_lun; /* Whenever a page is written, this is updated
+			    * to point to the next write lun */
+
+	spinlock_t bio_lock;
+	struct bio_list requeue_bios;
+	struct work_struct ws_requeue;
+
+	/* Simple translation map of logical addresses to physical addresses.
+	 * The logical addresses are known by the host system, while the
+	 * physical addresses are used when writing to the disk block device. */
+	struct rrpc_addr *trans_map;
+	/* also store a reverse map for garbage collection */
+	struct rrpc_rev_addr *rev_trans_map;
+	spinlock_t rev_lock;
+
+	struct rrpc_inflight inflights;
+
+	mempool_t *addr_pool;
+	mempool_t *page_pool;
+	mempool_t *gcb_pool;
+	mempool_t *rq_pool;
+
+	struct timer_list gc_timer;
+	struct workqueue_struct *krqd_wq;
+	struct workqueue_struct *kgc_wq;
+};
+
+struct rrpc_block_gc {
+	struct rrpc *rrpc;
+	struct rrpc_block *rblk;
+	struct work_struct ws_gc;
+};
+
+/* Logical to physical mapping */
+struct rrpc_addr {
+	sector_t addr;
+	struct rrpc_block *rblk;
+};
+
+/* Physical to logical mapping */
+struct rrpc_rev_addr {
+	sector_t addr;
+};
+
+static inline sector_t rrpc_get_laddr(struct bio *bio)
+{
+	return bio->bi_iter.bi_sector / NR_PHY_IN_LOG;
+}
+
+static inline unsigned int rrpc_get_pages(struct bio *bio)
+{
+	return bio->bi_iter.bi_size / RRPC_EXPOSED_PAGE_SIZE;
+}
+
+static inline sector_t rrpc_get_sector(sector_t laddr)
+{
+	return laddr * NR_PHY_IN_LOG;
+}
+
+static inline int request_intersects(struct rrpc_inflight_rq *r,
+				sector_t laddr_start, sector_t laddr_end)
+{
+	/* any overlap counts, including partial ones */
+	return laddr_end >= r->l_start && laddr_start <= r->l_end;
+}
+
+static int __rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
+			     unsigned pages, struct rrpc_inflight_rq *r)
+{
+	sector_t laddr_end = laddr + pages - 1;
+	struct rrpc_inflight_rq *rtmp;
+
+	spin_lock_irq(&rrpc->inflights.lock);
+	list_for_each_entry(rtmp, &rrpc->inflights.reqs, list) {
+		if (unlikely(request_intersects(rtmp, laddr, laddr_end))) {
+			/* existing, overlapping request, come back later */
+			spin_unlock_irq(&rrpc->inflights.lock);
+			return 1;
+		}
+	}
+
+	r->l_start = laddr;
+	r->l_end = laddr_end;
+
+	list_add_tail(&r->list, &rrpc->inflights.reqs);
+	spin_unlock_irq(&rrpc->inflights.lock);
+	return 0;
+}
+
+static inline int rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
+				 unsigned pages,
+				 struct rrpc_inflight_rq *r)
+{
+	BUG_ON((laddr + pages) > rrpc->nr_pages);
+
+	return __rrpc_lock_laddr(rrpc, laddr, pages, r);
+}
+
+static inline struct rrpc_inflight_rq *rrpc_get_inflight_rq(struct nvm_rq *rqd)
+{
+	struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
+
+	return &rrqd->inflight_rq;
+}
+
+static inline int rrpc_lock_rq(struct rrpc *rrpc, struct bio *bio,
+							struct nvm_rq *rqd)
+{
+	sector_t laddr = rrpc_get_laddr(bio);
+	unsigned int pages = rrpc_get_pages(bio);
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+
+	return rrpc_lock_laddr(rrpc, laddr, pages, r);
+}
+
+static inline void rrpc_unlock_laddr(struct rrpc *rrpc,
+						struct rrpc_inflight_rq *r)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&rrpc->inflights.lock, flags);
+	list_del_init(&r->list);
+	spin_unlock_irqrestore(&rrpc->inflights.lock, flags);
+}
+
+static inline void rrpc_unlock_rq(struct rrpc *rrpc, struct nvm_rq *rqd)
+{
+	struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
+	uint8_t pages = rqd->npages;
+
+	BUG_ON((r->l_start + pages) > rrpc->nr_pages);
+
+	rrpc_unlock_laddr(rrpc, r);
+}
+
+#endif /* RRPC_H_ */
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 9654354..0ac73d5 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -135,6 +135,7 @@ struct nvm_dev_ops {
 	nvm_alloc_ppalist_fn	*alloc_ppalist;
 	nvm_free_ppalist_fn	*free_ppalist;
 
+	int			dev_sector_size;
 	uint8_t			max_phys_sect;
 };
 
@@ -286,10 +287,8 @@ extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
  * bytes chunks. This should be set to the smallest command size available for a
  * given device.
  */
-#define NVM_SECTOR (512)
-#define EXPOSED_PAGE_SIZE (4096)
 
-#define NR_PHY_IN_LOG (EXPOSED_PAGE_SIZE / NVM_SECTOR)
+#define DEV_EXPOSED_PAGE_SIZE (4096)
 
 #define NVM_MSG_PREFIX "nvm"
 #define ADDR_EMPTY (~0ULL)
-- 
2.1.4


* [PATCH v7 3/5] lightnvm: Hybrid Open-Channel SSD block manager
  2015-08-07 14:29 ` Matias Bjørling
@ 2015-08-07 14:29   ` Matias Bjørling
  -1 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-08-07 14:29 UTC (permalink / raw)
  To: hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

The host implementation for Open-Channel SSDs is divided into block
management and targets. This patch implements the block manager for
hybrid open-channel SSDs. On top of it, a target such as rrpc is initialized.
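
[Editor's sketch] The split can be pictured with a self-contained toy
program: the manager owns a per-lun pool of erase blocks, and a target
takes blocks with get and hands them back after erasing them with put.
The names below are illustrative only; in the patch itself the calls
are nvm_get_blk()/nvm_put_blk(), as used by rrpc in patch 2/5.

#include <stdio.h>

#define NR_BLOCKS 8

struct toy_lun {
	int free_stack[NR_BLOCKS];	/* ids of blocks ready for use */
	int nr_free;
};

/* hand a free erase block to the target, or -1 if the lun is empty */
static int toy_get_blk(struct toy_lun *lun)
{
	if (!lun->nr_free)
		return -1;
	return lun->free_stack[--lun->nr_free];
}

/* the target returns a block once it has been erased */
static void toy_put_blk(struct toy_lun *lun, int blk_id)
{
	lun->free_stack[lun->nr_free++] = blk_id;
}

int main(void)
{
	struct toy_lun lun = { .nr_free = NR_BLOCKS };
	int i, blk;

	for (i = 0; i < NR_BLOCKS; i++)
		lun.free_stack[i] = i;

	blk = toy_get_blk(&lun);	/* target claims a block */
	printf("got block %d, %d left\n", blk, lun.nr_free);
	toy_put_blk(&lun, blk);		/* and releases it after erase */
	printf("put back, %d free\n", lun.nr_free);
	return 0;
}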

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
 drivers/lightnvm/Kconfig  |   7 +
 drivers/lightnvm/Makefile |   1 +
 drivers/lightnvm/bm_hb.c  | 366 ++++++++++++++++++++++++++++++++++++++++++++++
 drivers/lightnvm/bm_hb.h  |  46 ++++++
 4 files changed, 420 insertions(+)
 create mode 100644 drivers/lightnvm/bm_hb.c
 create mode 100644 drivers/lightnvm/bm_hb.h

diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
index ab1fe57..37b00ae 100644
--- a/drivers/lightnvm/Kconfig
+++ b/drivers/lightnvm/Kconfig
@@ -23,4 +23,11 @@ config NVM_RRPC
 	host. The target is implemented using a linear mapping table and
 	cost-based garbage collection. It is optimized for 4K IO sizes.
 
+config NVM_BM_HB
+	tristate "Block manager for Hybrid Open-Channel SSD"
+	---help---
+	Block manager for SSDs that offload block management off to the device,
+	while keeping data placement and garbage collection decisions on the
+	host.
+
 endif # NVM
diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
index b2a39e2..9ff4669 100644
--- a/drivers/lightnvm/Makefile
+++ b/drivers/lightnvm/Makefile
@@ -4,3 +4,4 @@
 
 obj-$(CONFIG_NVM)		:= core.o
 obj-$(CONFIG_NVM_RRPC)		+= rrpc.o
+obj-$(CONFIG_NVM_BM_HB) 	+= bm_hb.o
diff --git a/drivers/lightnvm/bm_hb.c b/drivers/lightnvm/bm_hb.c
new file mode 100644
index 0000000..dff64c1
--- /dev/null
+++ b/drivers/lightnvm/bm_hb.c
@@ -0,0 +1,366 @@
+/*
+ * Copyright: Matias Bjorling <mb@lightnvm.io>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Implementation of a block manager for hybrid open-channel SSD.
+ */
+
+#include "bm_hb.h"
+
+static void hb_blocks_free(struct nvm_dev *dev)
+{
+	struct bm_hb *bm = dev->bmp;
+	struct bm_lun *lun;
+	int i;
+
+	bm_for_each_lun(bm, lun, i) {
+		if (!lun->vlun.blocks)
+			break;
+		vfree(lun->vlun.blocks);
+	}
+}
+
+static void hb_luns_free(struct nvm_dev *dev)
+{
+	struct bm_hb *bm = dev->bmp;
+
+	kfree(bm->luns);
+}
+
+static int hb_luns_init(struct nvm_dev *dev, struct bm_hb *bm)
+{
+	struct bm_lun *lun;
+	struct nvm_id_chnl *chnl;
+	int i;
+
+	bm->luns = kcalloc(bm->nr_luns, sizeof(struct bm_lun), GFP_KERNEL);
+	if (!bm->luns)
+		return -ENOMEM;
+
+	bm_for_each_lun(bm, lun, i) {
+		chnl = &dev->identity.chnls[i];
+		pr_info("bm_hb: p %u qsize %u gr %u ge %u begin %llu end %llu\n",
+			i, chnl->queue_size, chnl->gran_read, chnl->gran_erase,
+			chnl->laddr_begin, chnl->laddr_end);
+
+		spin_lock_init(&lun->vlun.lock);
+
+		INIT_LIST_HEAD(&lun->free_list);
+		INIT_LIST_HEAD(&lun->used_list);
+		INIT_LIST_HEAD(&lun->bb_list);
+
+		lun->vlun.id = i;
+		lun->chnl = chnl;
+		lun->reserved_blocks = 2; /* for GC only */
+		lun->vlun.nr_blocks =
+				(chnl->laddr_end - chnl->laddr_begin + 1) /
+				(chnl->gran_erase / chnl->gran_read);
+		lun->vlun.nr_free_blocks = lun->vlun.nr_blocks;
+		lun->vlun.nr_pages_per_blk =
+				chnl->gran_erase / chnl->gran_write *
+					(chnl->gran_write / dev->sector_size);
+
+		if (lun->vlun.nr_pages_per_blk > dev->max_pages_per_blk)
+			dev->max_pages_per_blk = lun->vlun.nr_pages_per_blk;
+
+		dev->total_pages += lun->vlun.nr_blocks *
+						lun->vlun.nr_pages_per_blk;
+		dev->total_blocks += lun->vlun.nr_blocks;
+	}
+
+	return 0;
+}
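+
+/*
+ * Editor's note -- worked example of the geometry math above, assuming
+ * gran_read = gran_write = 4096, gran_erase = 1048576, sector_size =
+ * 4096 and a laddr range spanning 262144 read units: each block holds
+ * (1048576 / 4096) * (4096 / 4096) = 256 pages, and the lun spans
+ * 262144 / 256 = 1024 blocks.
+ */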
+
+static int hb_block_bb(u32 lun_id, void *bb_bitmap, unsigned int nr_blocks,
+								void *private)
+{
+	struct bm_hb *bm = private;
+	struct bm_lun *lun = &bm->luns[lun_id];
+	struct nvm_block *block;
+	int i;
+
+	if (unlikely(bitmap_empty(bb_bitmap, nr_blocks)))
+		return 0;
+
+	i = -1;
+	while ((i = find_next_bit(bb_bitmap, nr_blocks, i + 1)) <
+			nr_blocks) {
+		block = &lun->vlun.blocks[i];
+		if (!block) {
+			pr_err("bm_hb: BB data is out of bounds!\n");
+			return -EINVAL;
+		}
+		list_move_tail(&block->list, &lun->bb_list);
+	}
+
+	return 0;
+}
+
+static int hb_block_map(u64 slba, u64 nlb, u64 *entries, void *private)
+{
+	struct nvm_dev *dev = private;
+	struct bm_hb *bm = dev->bmp;
+	sector_t max_pages = dev->total_pages * (dev->sector_size >> 9);
+	u64 elba = slba + nlb;
+	struct bm_lun *lun;
+	struct nvm_block *blk;
+	sector_t total_pgs_per_lun = /* each lun has the same configuration */
+		 bm->luns[0].vlun.nr_blocks * bm->luns[0].vlun.nr_pages_per_blk;
+	u64 i;
+	int lun_id;
+
+	if (unlikely(elba > dev->total_pages)) {
+		pr_err("bm_hb: L2P data from device is out of bounds!\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < nlb; i++) {
+		u64 pba = le64_to_cpu(entries[i]);
+
+		if (unlikely(pba >= max_pages && pba != U64_MAX)) {
+			pr_err("bm_hb: L2P data entry is out of bounds!\n");
+			return -EINVAL;
+		}
+
+		/* Address zero is special: the first page on a disk is
+		 * protected, as it often holds internal device boot
+		 * information. */
+		if (!pba)
+			continue;
+
+		/* resolve block from physical address */
+		lun_id = pba / total_pgs_per_lun;
+		lun = &bm->luns[lun_id];
+
+		/* Calculate block offset into lun */
+		pba = pba - (total_pgs_per_lun * lun_id);
+		blk = &lun->vlun.blocks[pba / lun->vlun.nr_pages_per_blk];
+
+		if (!blk->type) {
+			/* at this point, we don't know anything about the
+			 * block. It's up to the FTL on top to re-establish
+			 * the block state */
+			list_move_tail(&blk->list, &lun->used_list);
+			blk->type = 1;
+			lun->vlun.nr_free_blocks--;
+		}
+	}
+
+	return 0;
+}
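+
+/*
+ * Editor's note -- example of the address split above: with 1024 blocks
+ * of 256 pages per lun (262144 pages per lun), pba 300000 resolves to
+ * lun 300000 / 262144 = 1 and block (300000 - 262144) / 256 = 147
+ * within that lun.
+ */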
+
+static int hb_blocks_init(struct nvm_dev *dev, struct bm_hb *bm)
+{
+	struct bm_lun *lun;
+	struct nvm_block *block;
+	sector_t lun_iter, blk_iter, cur_block_id = 0;
+	int ret;
+
+	bm_for_each_lun(bm, lun, lun_iter) {
+		lun->vlun.blocks = vzalloc(sizeof(struct nvm_block) *
+						lun->vlun.nr_blocks);
+		if (!lun->vlun.blocks)
+			return -ENOMEM;
+
+		for (blk_iter = 0; blk_iter < lun->vlun.nr_blocks; blk_iter++) {
+			block = &lun->vlun.blocks[blk_iter];
+
+			INIT_LIST_HEAD(&block->list);
+
+			block->lun = &lun->vlun;
+			block->id = cur_block_id++;
+
+			/* First block is reserved for device */
+			if (unlikely(lun_iter == 0 && blk_iter == 0))
+				continue;
+
+			list_add_tail(&block->list, &lun->free_list);
+		}
+
+		if (dev->ops->get_bb_tbl) {
+			ret = dev->ops->get_bb_tbl(dev->q, lun->vlun.id,
+			lun->vlun.nr_blocks, hb_block_bb, bm);
+			if (ret)
+				pr_err("bm_hb: could not read BB table\n");
+		}
+	}
+
+	if (dev->ops->get_l2p_tbl) {
+		ret = dev->ops->get_l2p_tbl(dev->q, 0, dev->total_pages,
+							hb_block_map, dev);
+		if (ret) {
+			pr_err("bm_hb: could not read L2P table.\n");
+			pr_warn("bm_hb: falling back to default block initialization\n");
+		}
+	}
+
+	return 0;
+}
+
+static int hb_register(struct nvm_dev *dev)
+{
+	struct bm_hb *bm;
+	int ret;
+
+	if (!(dev->features.rsp & NVM_RSP_L2P))
+		return 0;
+
+	bm = kzalloc(sizeof(struct bm_hb), GFP_KERNEL);
+	if (!bm)
+		return -ENOMEM;
+
+	bm->nr_luns = dev->nr_luns;
+	dev->bmp = bm;
+
+	ret = hb_luns_init(dev, bm);
+	if (ret) {
+		pr_err("bm_hb: could not initialize luns\n");
+		goto err;
+	}
+
+	ret = hb_blocks_init(dev, bm);
+	if (ret) {
+		pr_err("bm_hb: could not initialize blocks\n");
+		goto err;
+	}
+
+	return 1;
+err:
+	kfree(bm);
+	return ret;
+}
+
+static void hb_unregister(struct nvm_dev *dev)
+{
+	hb_blocks_free(dev);
+	hb_luns_free(dev);
+	kfree(dev->bmp);
+	dev->bmp = NULL;
+}
+
+static struct nvm_block *hb_get_blk(struct nvm_dev *dev, struct nvm_lun *vlun,
+							unsigned long flags)
+{
+	struct bm_lun *lun = container_of(vlun, struct bm_lun, vlun);
+	struct nvm_block *blk = NULL;
+	int is_gc = flags & NVM_IOTYPE_GC;
+
+	BUG_ON(!lun);
+
+	spin_lock(&vlun->lock);
+
+	if (list_empty(&lun->free_list)) {
+		pr_err_ratelimited("bm_hb: lun %u has no free blocks available\n",
+								lun->vlun.id);
+		spin_unlock(&vlun->lock);
+		goto out;
+	}
+
+	if (!is_gc && lun->vlun.nr_free_blocks < lun->reserved_blocks) {
+		/* non-GC writers may not dip into the GC reserve */
+		spin_unlock(&vlun->lock);
+		goto out;
+	}
+
+	blk = list_first_entry(&lun->free_list, struct nvm_block, list);
+	list_move_tail(&blk->list, &lun->used_list);
+
+	lun->vlun.nr_free_blocks--;
+
+	spin_unlock(&vlun->lock);
+out:
+	return blk;
+}
+
+static void hb_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	struct nvm_lun *vlun = blk->lun;
+	struct bm_lun *lun = container_of(vlun, struct bm_lun, vlun);
+
+	spin_lock(&vlun->lock);
+
+	list_move_tail(&blk->list, &lun->free_list);
+	lun->vlun.nr_free_blocks++;
+
+	spin_unlock(&vlun->lock);
+}
+
+static int hb_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
+{
+	if (!dev->ops->submit_io)
+		return 0;
+
+	return dev->ops->submit_io(dev->q, rqd);
+}
+
+static void hb_end_io(struct nvm_rq *rqd, int error)
+{
+	struct nvm_tgt_instance *ins = rqd->ins;
+
+	ins->tt->end_io(rqd, error);
+}
+
+static int hb_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	if (!dev->ops->erase_block)
+		return 0;
+
+	return dev->ops->erase_block(dev->q, blk->id);
+}
+
+static struct nvm_lun *hb_get_luns(struct nvm_dev *dev, int begin, int end)
+{
+	struct bm_hb *bm = dev->bmp;
+
+	return &bm->luns[begin].vlun;
+}
+
+static void hb_free_blocks_print(struct nvm_dev *dev)
+{
+	struct bm_hb *bm = dev->bmp;
+	struct bm_lun *lun;
+	unsigned int i;
+
+	bm_for_each_lun(bm, lun, i)
+		pr_info("%s: lun%8u\t%u\n",
+					dev->name, i, lun->vlun.nr_free_blocks);
+}
+
+static struct nvm_bm_type bm_hb = {
+	.name		= "hb",
+
+	.register_bm	= hb_register,
+	.unregister_bm	= hb_unregister,
+
+	.get_blk	= hb_get_blk,
+	.put_blk	= hb_put_blk,
+
+	.submit_io	= hb_submit_io,
+	.end_io		= hb_end_io,
+	.erase_blk	= hb_erase_blk,
+
+	.get_luns	= hb_get_luns,
+	.free_blocks_print = hb_free_blocks_print,
+};
+
+static int __init hb_module_init(void)
+{
+	return nvm_register_bm(&bm_hb);
+}
+
+static void hb_module_exit(void)
+{
+	nvm_unregister_bm(&bm_hb);
+}
+
+module_init(hb_module_init);
+module_exit(hb_module_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Block manager for Hybrid Open-Channel SSDs");
diff --git a/drivers/lightnvm/bm_hb.h b/drivers/lightnvm/bm_hb.h
new file mode 100644
index 0000000..b856a70
--- /dev/null
+++ b/drivers/lightnvm/bm_hb.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright: Matias Bjorling <mb@lightnvm.io>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ */
+
+#ifndef BM_HB_H_
+#define BM_HB_H_
+
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include <linux/lightnvm.h>
+
+struct bm_lun {
+	struct nvm_lun vlun;
+
+	int reserved_blocks;
+	/* lun block lists */
+	struct list_head used_list;	/* In-use blocks */
+	struct list_head free_list;	/* Unused blocks, i.e. released
+					 * and ready for use */
+	struct list_head bb_list;	/* Bad blocks. Mutually exclusive with
+					   free_list and used_list */
+
+	struct nvm_id_chnl *chnl;
+};
+
+struct bm_hb {
+	int nr_luns;
+	struct bm_lun *luns;
+};
+
+#define bm_for_each_lun(bm, lun, i) \
+		for ((i) = 0, lun = &(bm)->luns[0]; \
+			(i) < (bm)->nr_luns; (i)++, lun = &(bm)->luns[(i)])
+
+#endif /* BM_HB_H_ */
-- 
2.1.4



* [PATCH v7 3/5] lightnvm: Hybrid Open-Channel SSD block manager
@ 2015-08-07 14:29   ` Matias Bjørling
  0 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-08-07 14:29 UTC (permalink / raw)


The host implementation for Open-Channel SSDs is divided into block
management and targets. This patch implements the block manager for
hybrid open-channel SSDs. On top of it, a target such as rrpc is initialized.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
 drivers/lightnvm/Kconfig  |   7 +
 drivers/lightnvm/Makefile |   1 +
 drivers/lightnvm/bm_hb.c  | 366 ++++++++++++++++++++++++++++++++++++++++++++++
 drivers/lightnvm/bm_hb.h  |  46 ++++++
 4 files changed, 420 insertions(+)
 create mode 100644 drivers/lightnvm/bm_hb.c
 create mode 100644 drivers/lightnvm/bm_hb.h

diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
index ab1fe57..37b00ae 100644
--- a/drivers/lightnvm/Kconfig
+++ b/drivers/lightnvm/Kconfig
@@ -23,4 +23,11 @@ config NVM_RRPC
 	host. The target is implemented using a linear mapping table and
 	cost-based garbage collection. It is optimized for 4K IO sizes.
 
+config NVM_BM_HB
+	tristate "Block manager for Hybrid Open-Channel SSD"
+	---help---
+	Block manager for SSDs that offload block management off to the device,
+	while keeping data placement and garbage collection decisions on the
+	host.
+
 endif # NVM
diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
index b2a39e2..9ff4669 100644
--- a/drivers/lightnvm/Makefile
+++ b/drivers/lightnvm/Makefile
@@ -4,3 +4,4 @@
 
 obj-$(CONFIG_NVM)		:= core.o
 obj-$(CONFIG_NVM_RRPC)		+= rrpc.o
+obj-$(CONFIG_NVM_BM_HB) 	+= bm_hb.o
diff --git a/drivers/lightnvm/bm_hb.c b/drivers/lightnvm/bm_hb.c
new file mode 100644
index 0000000..dff64c1
--- /dev/null
+++ b/drivers/lightnvm/bm_hb.c
@@ -0,0 +1,366 @@
+/*
+ * Copyright: Matias Bjorling <mb@lightnvm.io>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Implementation of a block manager for hybrid open-channel SSD.
+ */
+
+#include "bm_hb.h"
+
+static void hb_blocks_free(struct nvm_dev *dev)
+{
+	struct bm_hb *bm = dev->bmp;
+	struct bm_lun *lun;
+	int i;
+
+	bm_for_each_lun(bm, lun, i) {
+		if (!lun->vlun.blocks)
+			break;
+		vfree(lun->vlun.blocks);
+	}
+}
+
+static void hb_luns_free(struct nvm_dev *dev)
+{
+	struct bm_hb *bm = dev->bmp;
+
+	kfree(bm->luns);
+}
+
+static int hb_luns_init(struct nvm_dev *dev, struct bm_hb *bm)
+{
+	struct bm_lun *lun;
+	struct nvm_id_chnl *chnl;
+	int i;
+
+	bm->luns = kcalloc(bm->nr_luns, sizeof(struct bm_lun), GFP_KERNEL);
+	if (!bm->luns)
+		return -ENOMEM;
+
+	bm_for_each_lun(bm, lun, i) {
+		chnl = &dev->identity.chnls[i];
+		pr_info("bm_hb: p %u qsize %u gr %u ge %u begin %llu end %llu\n",
+			i, chnl->queue_size, chnl->gran_read, chnl->gran_erase,
+			chnl->laddr_begin, chnl->laddr_end);
+
+		spin_lock_init(&lun->vlun.lock);
+
+		INIT_LIST_HEAD(&lun->free_list);
+		INIT_LIST_HEAD(&lun->used_list);
+		INIT_LIST_HEAD(&lun->bb_list);
+
+		lun->vlun.id = i;
+		lun->chnl = chnl;
+		lun->reserved_blocks = 2; /* for GC only */
+		lun->vlun.nr_blocks =
+				(chnl->laddr_end - chnl->laddr_begin + 1) /
+				(chnl->gran_erase / chnl->gran_read);
+		lun->vlun.nr_free_blocks = lun->vlun.nr_blocks;
+		lun->vlun.nr_pages_per_blk =
+				chnl->gran_erase / chnl->gran_write *
+					(chnl->gran_write / dev->sector_size);
+
+		if (lun->vlun.nr_pages_per_blk > dev->max_pages_per_blk)
+			dev->max_pages_per_blk = lun->vlun.nr_pages_per_blk;
+
+		dev->total_pages += lun->vlun.nr_blocks *
+						lun->vlun.nr_pages_per_blk;
+		dev->total_blocks += lun->vlun.nr_blocks;
+	}
+
+	return 0;
+}
+
+static int hb_block_bb(u32 lun_id, void *bb_bitmap, unsigned int nr_blocks,
+								void *private)
+{
+	struct bm_hb *bm = private;
+	struct bm_lun *lun = &bm->luns[lun_id];
+	struct nvm_block *block;
+	int i;
+
+	if (unlikely(bitmap_empty(bb_bitmap, nr_blocks)))
+		return 0;
+
+	i = -1;
+	while ((i = find_next_bit(bb_bitmap, nr_blocks, i + 1)) <
+			nr_blocks) {
+		block = &lun->vlun.blocks[i];
+		if (!block) {
+			pr_err("bm_hb: BB data is out of bounds!\n");
+			return -EINVAL;
+		}
+		list_move_tail(&block->list, &lun->bb_list);
+	}
+
+	return 0;
+}
+
+static int hb_block_map(u64 slba, u64 nlb, u64 *entries, void *private)
+{
+	struct nvm_dev *dev = private;
+	struct bm_hb *bm = dev->bmp;
+	sector_t max_pages = dev->total_pages * (dev->sector_size >> 9);
+	u64 elba = slba + nlb;
+	struct bm_lun *lun;
+	struct nvm_block *blk;
+	sector_t total_pgs_per_lun = /* each lun have the same configuration */
+		 bm->luns[0].vlun.nr_blocks * bm->luns[0].vlun.nr_pages_per_blk;
+	u64 i;
+	int lun_id;
+
+	if (unlikely(elba > dev->total_pages)) {
+		pr_err("bm_hb: L2P data from device is out of bounds!\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < nlb; i++) {
+		u64 pba = le64_to_cpu(entries[i]);
+
+		if (unlikely(pba >= max_pages && pba != U64_MAX)) {
+			pr_err("bm_hb: L2P data entry is out of bounds!\n");
+			return -EINVAL;
+		}
+
+		/* Address zero is a special one. The first page on a disk is
+		 * protected. As it often holds internal device boot
+		 * information. */
+		if (!pba)
+			continue;
+
+		/* resolve block from physical address */
+		lun_id = pba / total_pgs_per_lun;
+		lun = &bm->luns[lun_id];
+
+		/* Calculate block offset into lun */
+		pba = pba - (total_pgs_per_lun * lun_id);
+		blk = &lun->vlun.blocks[pba / lun->vlun.nr_pages_per_blk];
+
+		if (!blk->type) {
+			/* at this point, we don't know anything about the
+			 * block. It's up to the FTL on top to re-etablish the
+			 * block state */
+			list_move_tail(&blk->list, &lun->used_list);
+			blk->type = 1;
+			lun->vlun.nr_free_blocks--;
+		}
+	}
+
+	return 0;
+}
+
+static int hb_blocks_init(struct nvm_dev *dev, struct bm_hb *bm)
+{
+	struct bm_lun *lun;
+	struct nvm_block *block;
+	sector_t lun_iter, blk_iter, cur_block_id = 0;
+	int ret;
+
+	bm_for_each_lun(bm, lun, lun_iter) {
+		lun->vlun.blocks = vzalloc(sizeof(struct nvm_block) *
+						lun->vlun.nr_blocks);
+		if (!lun->vlun.blocks)
+			return -ENOMEM;
+
+		for (blk_iter = 0; blk_iter < lun->vlun.nr_blocks; blk_iter++) {
+			block = &lun->vlun.blocks[blk_iter];
+
+			INIT_LIST_HEAD(&block->list);
+
+			block->lun = &lun->vlun;
+			block->id = cur_block_id++;
+
+			/* First block is reserved for device */
+			if (unlikely(lun_iter == 0 && blk_iter == 0))
+				continue;
+
+			list_add_tail(&block->list, &lun->free_list);
+		}
+
+		if (dev->ops->get_bb_tbl) {
+			ret = dev->ops->get_bb_tbl(dev->q, lun->vlun.id,
+			lun->vlun.nr_blocks, hb_block_bb, bm);
+			if (ret)
+				pr_err("bm_hb: could not read BB table\n");
+		}
+	}
+
+	if (dev->ops->get_l2p_tbl) {
+		ret = dev->ops->get_l2p_tbl(dev->q, 0, dev->total_pages,
+							hb_block_map, dev);
+		if (ret) {
+			pr_err("bm_hb: could not read L2P table.\n");
+			pr_warn("bm_hb: default block initialization");
+		}
+	}
+
+	return 0;
+}
+
+static int hb_register(struct nvm_dev *dev)
+{
+	struct bm_hb *bm;
+	int ret;
+
+	if (!dev->features.rsp & NVM_RSP_L2P)
+		return 0;
+
+	bm = kzalloc(sizeof(struct bm_hb), GFP_KERNEL);
+	if (!bm)
+		return -ENOMEM;
+
+	bm->nr_luns = dev->nr_luns;
+	dev->bmp = bm;
+
+	ret = hb_luns_init(dev, bm);
+	if (ret) {
+		pr_err("bm_hb: could not initialize luns\n");
+		goto err;
+	}
+
+	ret = hb_blocks_init(dev, bm);
+	if (ret) {
+		pr_err("bm_hb: could not initialize blocks\n");
+		goto err;
+	}
+
+	return 1;
+err:
+	kfree(bm);
+	return ret;
+}
+
+static void hb_unregister(struct nvm_dev *dev)
+{
+	hb_blocks_free(dev);
+	hb_luns_free(dev);
+	kfree(dev->bmp);
+	dev->bmp = NULL;
+}
+
+static struct nvm_block *hb_get_blk(struct nvm_dev *dev, struct nvm_lun *vlun,
+							unsigned long flags)
+{
+	struct bm_lun *lun = container_of(vlun, struct bm_lun, vlun);
+	struct nvm_block *blk = NULL;
+	int is_gc = flags & NVM_IOTYPE_GC;
+
+	BUG_ON(!lun);
+
+	spin_lock(&vlun->lock);
+
+	if (list_empty(&lun->free_list)) {
+		pr_err_ratelimited("bm_hb: lun %u have no free pages available",
+								lun->vlun.id);
+		spin_unlock(&vlun->lock);
+		goto out;
+	}
+
+	while (!is_gc && lun->vlun.nr_free_blocks < lun->reserved_blocks) {
+		spin_unlock(&vlun->lock);
+		goto out;
+	}
+
+	blk = list_first_entry(&lun->free_list, struct nvm_block, list);
+	list_move_tail(&blk->list, &lun->used_list);
+
+	lun->vlun.nr_free_blocks--;
+
+	spin_unlock(&vlun->lock);
+out:
+	return blk;
+}
+
+static void hb_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	struct nvm_lun *vlun = blk->lun;
+	struct bm_lun *lun = container_of(vlun, struct bm_lun, vlun);
+
+	spin_lock(&vlun->lock);
+
+	list_move_tail(&blk->list, &lun->free_list);
+	lun->vlun.nr_free_blocks++;
+
+	spin_unlock(&vlun->lock);
+}
+
+static int hb_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
+{
+	if (!dev->ops->submit_io)
+		return 0;
+
+	return dev->ops->submit_io(dev->q, rqd);
+}
+
+static void hb_end_io(struct nvm_rq *rqd, int error)
+{
+	struct nvm_tgt_instance *ins = rqd->ins;
+
+	ins->tt->end_io(rqd, error);
+}
+
+static int hb_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
+{
+	if (!dev->ops->erase_block)
+		return 0;
+
+	return dev->ops->erase_block(dev->q, blk->id);
+}
+
+static struct nvm_lun *hb_get_luns(struct nvm_dev *dev, int begin, int end)
+{
+	struct bm_hb *bm = dev->bmp;
+
+	return &bm->luns[begin].vlun;
+}
+
+static void hb_free_blocks_print(struct nvm_dev *dev)
+{
+	struct bm_hb *bm = dev->bmp;
+	struct bm_lun *lun;
+	unsigned int i;
+
+	bm_for_each_lun(bm, lun, i)
+		pr_info("%s: lun%8u\t%u\n",
+					dev->name, i, lun->vlun.nr_free_blocks);
+}
+
+static struct nvm_bm_type bm_hb = {
+	.name		= "hb",
+
+	.register_bm	= hb_register,
+	.unregister_bm	= hb_unregister,
+
+	.get_blk	= hb_get_blk,
+	.put_blk	= hb_put_blk,
+
+	.submit_io	= hb_submit_io,
+	.end_io		= hb_end_io,
+	.erase_blk	= hb_erase_blk,
+
+	.get_luns	= hb_get_luns,
+	.free_blocks_print = hb_free_blocks_print,
+};
+
+static int __init hb_module_init(void)
+{
+	return nvm_register_bm(&bm_hb);
+}
+
+static void hb_module_exit(void)
+{
+	nvm_unregister_bm(&bm_hb);
+}
+
+module_init(hb_module_init);
+module_exit(hb_module_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Block manager for Hybrid Open-Channel SSDs");
diff --git a/drivers/lightnvm/bm_hb.h b/drivers/lightnvm/bm_hb.h
new file mode 100644
index 0000000..b856a70
--- /dev/null
+++ b/drivers/lightnvm/bm_hb.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright: Matias Bjorling <mb at lightnvm.io>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ */
+
+#ifndef BM_HB_H_
+#define BM_HB_H_
+
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include <linux/lightnvm.h>
+
+struct bm_lun {
+	struct nvm_lun vlun;
+
+	int reserved_blocks;
+	/* lun block lists */
+	struct list_head used_list;	/* In-use blocks */
+	struct list_head free_list;	/* Not used blocks i.e. released
+					 * and ready for use */
+	struct list_head bb_list;	/* Bad blocks. Mutually exclusive with
+					   free_list and used_list */
+
+	struct nvm_id_chnl *chnl;
+};
+
+struct bm_hb {
+	int nr_luns;
+	struct bm_lun *luns;
+};
+
+#define bm_for_each_lun(bm, lun, i) \
+		for ((i) = 0, lun = &(bm)->luns[0]; \
+			(i) < (bm)->nr_luns; (i)++, lun = &(bm)->luns[(i)])
+
+#endif /* BM_HB_H_ */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v7 4/5] null_nvm: Lightnvm test driver
  2015-08-07 14:29 ` Matias Bjørling
@ 2015-08-07 14:29   ` Matias Bjørling
  -1 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-08-07 14:29 UTC (permalink / raw)
  To: hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

This driver implements the I/O flow of a LightNVM device driver, but
performs no data transfers. It can be used to test setup/teardown of
devices and to evaluate the performance of block managers and targets.

The framework of the driver is derived from the null_blk module.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
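A note on the exposed geometry: null_id() splits the emulated capacity
evenly across the configured channels, in block-size units. A minimal
userspace sketch of the same arithmetic, using the module parameter
defaults (illustrative only, not part of the patch):

	#include <stdio.h>

	int main(void)
	{
		/* mirror the null_nvm module parameter defaults */
		unsigned long long gb = 250, bs = 4096;
		int num_channels = 1, i;
		unsigned long long size = gb * 1024 * 1024 * 1024ULL;
		/* per-channel capacity, counted in bs-sized sectors */
		unsigned long long per_chnl_size = size / bs / num_channels;

		for (i = 0; i < num_channels; i++)
			printf("chnl %d: laddr %llu..%llu\n", i,
			       per_chnl_size * i,
			       per_chnl_size * (i + 1) - 1);
		return 0;
	}
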
 drivers/lightnvm/Kconfig    |   3 +
 drivers/lightnvm/Makefile   |   1 +
 drivers/lightnvm/null_nvm.c | 481 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 485 insertions(+)
 create mode 100644 drivers/lightnvm/null_nvm.c

diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
index 37b00ae..a80eef5 100644
--- a/drivers/lightnvm/Kconfig
+++ b/drivers/lightnvm/Kconfig
@@ -30,4 +30,7 @@ config NVM_BM_HB
 	while keeping data placement and garbage collection decisions on the
 	host.
 
+config NVM_NULL_NVM
+	tristate "Null test LightNVM driver"
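+	---help---
+	A LightNVM device driver that performs no data transfers. Useful
+	for testing setup/teardown of devices and for evaluating the
+	performance of block managers and targets.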
+
 endif # NVM
diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
index 9ff4669..1892d73 100644
--- a/drivers/lightnvm/Makefile
+++ b/drivers/lightnvm/Makefile
@@ -5,3 +5,4 @@
 obj-$(CONFIG_NVM)		:= core.o
 obj-$(CONFIG_NVM_RRPC)		+= rrpc.o
 obj-$(CONFIG_NVM_BM_HB) 	+= bm_hb.o
+obj-$(CONFIG_NVM_NULL_NVM)	+= null_nvm.o
diff --git a/drivers/lightnvm/null_nvm.c b/drivers/lightnvm/null_nvm.c
new file mode 100644
index 0000000..05f1cbd
--- /dev/null
+++ b/drivers/lightnvm/null_nvm.c
@@ -0,0 +1,481 @@
+/*
+ * derived from Jens Axboe's block/null_blk.c
+ */
+
+#include <linux/module.h>
+
+#include <linux/moduleparam.h>
+#include <linux/sched.h>
+#include <linux/blkdev.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/blk-mq.h>
+#include <linux/hrtimer.h>
+#include <linux/lightnvm.h>
+
+static struct kmem_cache *ppa_cache;
+struct nulln_cmd {
+	struct llist_node ll_list;
+	struct request *rq;
+};
+
+struct nulln {
+	struct list_head list;
+	unsigned int index;
+	struct request_queue *q;
+	struct blk_mq_tag_set tag_set;
+	struct hrtimer timer;
+	char disk_name[DISK_NAME_LEN];
+};
+
+static LIST_HEAD(nulln_list);
+static struct mutex nulln_lock;
+static int nulln_indexes;
+
+struct completion_queue {
+	struct llist_head list;
+	struct hrtimer timer;
+};
+
+/*
+ * These are per-cpu for now; they will need to be configured by the
+ * complete_queues parameter and appropriately mapped.
+ */
+static DEFINE_PER_CPU(struct completion_queue, completion_queues);
+
+enum {
+	NULL_IRQ_NONE		= 0,
+	NULL_IRQ_SOFTIRQ	= 1,
+	NULL_IRQ_TIMER		= 2,
+};
+
+static int submit_queues;
+module_param(submit_queues, int, S_IRUGO);
+MODULE_PARM_DESC(submit_queues, "Number of submission queues");
+
+static int home_node = NUMA_NO_NODE;
+module_param(home_node, int, S_IRUGO);
+MODULE_PARM_DESC(home_node, "Home node for the device");
+
+static int null_param_store_val(const char *str, int *val, int min, int max)
+{
+	int ret, new_val;
+
+	ret = kstrtoint(str, 10, &new_val);
+	if (ret)
+		return -EINVAL;
+
+	if (new_val < min || new_val > max)
+		return -EINVAL;
+
+	*val = new_val;
+	return 0;
+}
+
+static int gb = 250;
+module_param(gb, int, S_IRUGO);
+MODULE_PARM_DESC(gb, "Size in GB");
+
+static int bs = 4096;
+module_param(bs, int, S_IRUGO);
+MODULE_PARM_DESC(bs, "Block size (in bytes)");
+
+static int nr_devices = 1;
+module_param(nr_devices, int, S_IRUGO);
+MODULE_PARM_DESC(nr_devices, "Number of devices to register");
+
+static int irqmode = NULL_IRQ_SOFTIRQ;
+
+static int null_set_irqmode(const char *str, const struct kernel_param *kp)
+{
+	return null_param_store_val(str, &irqmode, NULL_IRQ_NONE,
+					NULL_IRQ_TIMER);
+}
+
+static const struct kernel_param_ops null_irqmode_param_ops = {
+	.set	= null_set_irqmode,
+	.get	= param_get_int,
+};
+
+device_param_cb(irqmode, &null_irqmode_param_ops, &irqmode, S_IRUGO);
+MODULE_PARM_DESC(irqmode, "IRQ completion handler. 0-none, 1-softirq, 2-timer");
+
+static int completion_nsec = 10000;
+module_param(completion_nsec, int, S_IRUGO);
+MODULE_PARM_DESC(completion_nsec, "Time in ns to complete a request in hardware. Default: 10,000ns");
+
+static int hw_queue_depth = 64;
+module_param(hw_queue_depth, int, S_IRUGO);
+MODULE_PARM_DESC(hw_queue_depth, "Queue depth for each hardware queue. Default: 64");
+
+static bool use_per_node_hctx;
+module_param(use_per_node_hctx, bool, S_IRUGO);
+MODULE_PARM_DESC(use_per_node_hctx, "Use per-node allocation for hardware context queues. Default: false");
+
+static int num_channels = 1;
+module_param(num_channels, int, S_IRUGO);
+MODULE_PARM_DESC(num_channels, "Number of channels to be exposed. Default: 1");
+
+static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer)
+{
+	struct completion_queue *cq;
+	struct llist_node *entry;
+	struct nulln_cmd *cmd;
+
+	cq = &per_cpu(completion_queues, smp_processor_id());
+
+	while ((entry = llist_del_all(&cq->list)) != NULL) {
+		entry = llist_reverse_order(entry);
+		do {
+			cmd = container_of(entry, struct nulln_cmd, ll_list);
+			entry = entry->next;
+			blk_mq_end_request(cmd->rq, 0);
+
+			if (cmd->rq) {
+				struct request_queue *q = cmd->rq->q;
+
+				if (!q->mq_ops && blk_queue_stopped(q)) {
+					spin_lock(q->queue_lock);
+					if (blk_queue_stopped(q))
+						blk_start_queue(q);
+					spin_unlock(q->queue_lock);
+				}
+			}
+		} while (entry);
+	}
+
+	return HRTIMER_NORESTART;
+}
+
+static void null_cmd_end_timer(struct nulln_cmd *cmd)
+{
+	struct completion_queue *cq = &per_cpu(completion_queues, get_cpu());
+
+	cmd->ll_list.next = NULL;
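+	/* llist_add() returns true only when the list was empty, so the
+	 * first pending command is the one that arms the timer. */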
+	if (llist_add(&cmd->ll_list, &cq->list)) {
+		ktime_t kt = ktime_set(0, completion_nsec);
+
+		hrtimer_start(&cq->timer, kt, HRTIMER_MODE_REL_PINNED);
+	}
+
+	put_cpu();
+}
+
+static void null_softirq_done_fn(struct request *rq)
+{
+	blk_mq_end_request(rq, 0);
+}
+
+static inline void null_handle_cmd(struct nulln_cmd *cmd)
+{
+	/* Complete IO by inline, softirq or timer */
+	switch (irqmode) {
+	case NULL_IRQ_SOFTIRQ:
+	case NULL_IRQ_NONE:
+		blk_mq_complete_request(cmd->rq);
+		break;
+	case NULL_IRQ_TIMER:
+		null_cmd_end_timer(cmd);
+		break;
+	}
+}
+
+static int null_id(struct request_queue *q, struct nvm_id *id)
+{
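+	/* split the emulated capacity evenly across the channels,
+	 * counted in bs-sized sectors */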
+	sector_t size = gb * 1024 * 1024 * 1024ULL;
+	unsigned long per_chnl_size =
+				size / bs / num_channels;
+	struct nvm_id_chnl *chnl;
+	int i;
+
+	id->ver_id = 0x1;
+	id->nvm_type = NVM_NVMT_BLK;
+	id->nchannels = num_channels;
+
+	id->chnls = kmalloc_array(id->nchannels, sizeof(struct nvm_id_chnl),
+								GFP_KERNEL);
+	if (!id->chnls)
+		return -ENOMEM;
+
+	for (i = 0; i < id->nchannels; i++) {
+		chnl = &id->chnls[i];
+		chnl->queue_size = hw_queue_depth;
+		chnl->gran_read = bs;
+		chnl->gran_write = bs;
+		chnl->gran_erase = bs * 256;
+		chnl->oob_size = 0;
+		chnl->t_r = chnl->t_sqr = 25000; /* 25us */
+		chnl->t_w = chnl->t_sqw = 500000; /* 500us */
+		chnl->t_e = 1500000; /* 1.5ms */
+		chnl->io_sched = NVM_IOSCHED_CHANNEL;
+		chnl->laddr_begin = per_chnl_size * i;
+		chnl->laddr_end = per_chnl_size * (i + 1) - 1;
+	}
+
+	return 0;
+}
+
+static int null_get_features(struct request_queue *q,
+						struct nvm_get_features *gf)
+{
+	gf->rsp = NVM_RSP_L2P;
+	gf->ext = 0;
+
+	return 0;
+}
+
+static void null_end_io(struct request *rq, int error)
+{
+	struct nvm_rq *rqd = rq->end_io_data;
+	struct nvm_tgt_instance *ins = rqd->ins;
+
+	ins->tt->end_io(rq->end_io_data, error);
+
+	blk_put_request(rq);
+}
+
+static int null_submit_io(struct request_queue *q, struct nvm_rq *rqd)
+{
+	struct request *rq;
+	struct bio *bio = rqd->bio;
+
+	rq = blk_mq_alloc_request(q, bio_rw(bio), GFP_KERNEL, 0);
+	if (IS_ERR(rq))
+		return -ENOMEM;
+
+	rq->cmd_type = REQ_TYPE_DRV_PRIV;
+	rq->__sector = bio->bi_iter.bi_sector;
+	rq->ioprio = bio_prio(bio);
+
+	if (bio_has_data(bio))
+		rq->nr_phys_segments = bio_phys_segments(q, bio);
+
+	rq->__data_len = bio->bi_iter.bi_size;
+	rq->bio = rq->biotail = bio;
+
+	rq->end_io_data = rqd;
+
+	blk_execute_rq_nowait(q, NULL, rq, 0, null_end_io);
+
+	return 0;
+}
+
+static void *null_create_ppa_pool(struct request_queue *q)
+{
+	mempool_t *virtmem_pool;
+
+	ppa_cache = kmem_cache_create("ppa_list", PAGE_SIZE, 0, 0, NULL);
+	if (!ppa_cache) {
+		pr_err("null_nvm: Unable to craete kmem cache\n");
+		return NULL;
+	}
+
+	virtmem_pool = mempool_create_slab_pool(64, ppa_cache);
+	if (!virtmem_pool) {
+		pr_err("null_nvm: Unable to create virtual memory pool\n");
+		return NULL;
+	}
+
+	return virtmem_pool;
+}
+
+static void null_destroy_ppa_pool(void *pool)
+{
+	mempool_t *virtmem_pool = pool;
+
+	mempool_destroy(virtmem_pool);
+}
+
+static void *null_alloc_ppalist(struct request_queue *q, void *pool,
+				gfp_t mem_flags, dma_addr_t *dma_handler)
+{
+	sector_t *ppa_list;
+	mempool_t *virtmem_pool = pool;
+
+	ppa_list = mempool_alloc(virtmem_pool, mem_flags);
+	if (!ppa_list) {
+		pr_err("null_nvm: Unable to allocate virtual memory\n");
+		return NULL;
+	}
+
+	return ppa_list;
+}
+
+static void null_free_ppalist(void *pool, void *ppa_list,
+							dma_addr_t dma_handler)
+{
+	mempool_t *virtmem_pool = pool;
+
+	mempool_free(ppa_list, virtmem_pool);
+}
+
+static struct nvm_dev_ops nulln_dev_ops = {
+	.identify	= null_id,
+
+	.get_features		= null_get_features,
+
+	.submit_io		= null_submit_io,
+
+	.create_ppa_pool	= null_create_ppa_pool,
+	.destroy_ppa_pool	= null_destroy_ppa_pool,
+	.alloc_ppalist		= null_alloc_ppalist,
+	.free_ppalist		= null_free_ppalist,
+
+	/* Emulate nvme protocol */
+	.max_phys_sect		= 64,
+};
+
+static int null_queue_rq(struct blk_mq_hw_ctx *hctx,
+			 const struct blk_mq_queue_data *bd)
+{
+	struct nulln_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
+
+	cmd->rq = bd->rq;
+
+	blk_mq_start_request(bd->rq);
+
+	null_handle_cmd(cmd);
+	return BLK_MQ_RQ_QUEUE_OK;
+}
+
+static struct blk_mq_ops null_mq_ops = {
+	.queue_rq	= null_queue_rq,
+	.map_queue	= blk_mq_map_queue,
+	.complete	= null_softirq_done_fn,
+};
+
+static void null_del_dev(struct nulln *nulln)
+{
+	list_del_init(&nulln->list);
+
+	nvm_unregister(nulln->disk_name);
+
+	blk_cleanup_queue(nulln->q);
+	blk_mq_free_tag_set(&nulln->tag_set);
+	kfree(nulln);
+}
+
+static int null_add_dev(void)
+{
+	struct nulln *nulln;
+	int rv;
+
+	nulln = kzalloc_node(sizeof(*nulln), GFP_KERNEL, home_node);
+	if (!nulln) {
+		rv = -ENOMEM;
+		goto out;
+	}
+
+	if (use_per_node_hctx)
+		submit_queues = nr_online_nodes;
+
+	nulln->tag_set.ops = &null_mq_ops;
+	nulln->tag_set.nr_hw_queues = submit_queues;
+	nulln->tag_set.queue_depth = hw_queue_depth;
+	nulln->tag_set.numa_node = home_node;
+	nulln->tag_set.cmd_size = sizeof(struct nulln_cmd);
+	nulln->tag_set.driver_data = nulln;
+
+	rv = blk_mq_alloc_tag_set(&nulln->tag_set);
+	if (rv)
+		goto out_free_nulln;
+
+	nulln->q = blk_mq_init_queue(&nulln->tag_set);
+	if (IS_ERR(nulln->q)) {
+		rv = -ENOMEM;
+		goto out_cleanup_tags;
+	}
+
+	nulln->q->queuedata = nulln;
+	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, nulln->q);
+	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, nulln->q);
+
+	mutex_lock(&nulln_lock);
+	list_add_tail(&nulln->list, &nulln_list);
+	nulln->index = nulln_indexes++;
+	mutex_unlock(&nulln_lock);
+
+	blk_queue_logical_block_size(nulln->q, bs);
+	blk_queue_physical_block_size(nulln->q, bs);
+
+	sprintf(nulln->disk_name, "nulln%d", nulln->index);
+
+	rv = nvm_register(nulln->q, nulln->disk_name, &nulln_dev_ops);
+	if (rv)
+		goto out_cleanup_blk_queue;
+
+	return 0;
+
+out_cleanup_blk_queue:
+	blk_cleanup_queue(nulln->q);
+out_cleanup_tags:
+	blk_mq_free_tag_set(&nulln->tag_set);
+out_free_nulln:
+	kfree(nulln);
+out:
+	return rv;
+}
+
+static int __init null_init(void)
+{
+	unsigned int i;
+
+	if (bs > PAGE_SIZE) {
+		pr_warn("null_nvm: invalid block size\n");
+		pr_warn("null_nvm: defaults block size to %lu\n", PAGE_SIZE);
+		bs = PAGE_SIZE;
+	}
+
+	if (use_per_node_hctx) {
+		if (submit_queues < nr_online_nodes) {
+			pr_warn("null_nvm: submit_queues param is set to %u.",
+							nr_online_nodes);
+			submit_queues = nr_online_nodes;
+		}
+	} else if (submit_queues > nr_cpu_ids)
+		submit_queues = nr_cpu_ids;
+	else if (!submit_queues)
+		submit_queues = 1;
+
+	mutex_init(&nulln_lock);
+
+	/* Initialize a separate list for each CPU for issuing softirqs */
+	for_each_possible_cpu(i) {
+		struct completion_queue *cq = &per_cpu(completion_queues, i);
+
+		init_llist_head(&cq->list);
+
+		if (irqmode != NULL_IRQ_TIMER)
+			continue;
+
+		hrtimer_init(&cq->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		cq->timer.function = null_cmd_timer_expired;
+	}
+
+	for (i = 0; i < nr_devices; i++) {
+		if (null_add_dev())
+			return -EINVAL;
+	}
+
+	pr_info("null_nvm: module loaded\n");
+	return 0;
+}
+
+static void __exit null_exit(void)
+{
+	struct nulln *nulln;
+
+	mutex_lock(&nulln_lock);
+	while (!list_empty(&nulln_list)) {
+		nulln = list_entry(nulln_list.next, struct nulln, list);
+		null_del_dev(nulln);
+	}
+	mutex_unlock(&nulln_lock);
+}
+
+module_init(null_init);
+module_exit(null_exit);
+
+MODULE_AUTHOR("Matias Bjorling <mb@lightnvm.io>");
+MODULE_LICENSE("GPL");
-- 
2.1.4


* [PATCH v7 5/5] nvme: LightNVM support
  2015-08-07 14:29 ` Matias Bjørling
@ 2015-08-07 14:29   ` Matias Bjørling
  -1 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-08-07 14:29 UTC (permalink / raw)
  To: hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

The first generation of Open-Channel SSDs will be based on NVMe. The
integration requires that an NVMe device exposes itself as a LightNVM
device. Currently this is done by hooking into the Controller
Capabilities (CAP register) and a bit in NSFEAT for each namespace.

After detection, vendor-specific commands are used to identify the
device and enumerate supported features.

Signed-off-by: Javier González <jg@lightnvm.io>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
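A note on detection: it reduces to a single capability bit.
NVME_CAP_LIGHTNVM() tests bit 38 of the CAP register, and the result
selects the command set written to CC.CSS. A standalone sketch of that
selection, reusing the definitions this patch adds to
include/linux/nvme.h (the test program itself is illustrative only):

	#include <stdio.h>
	#include <stdint.h>

	#define NVME_CAP_LIGHTNVM(cap)	(((cap) >> 38) & 0x1)
	#define NVME_CC_CSS_NVM		(0 << 4)
	#define NVME_CC_CSS_LIGHTNVM	(1 << 4)

	int main(void)
	{
		/* a controller advertising the LightNVM command set */
		uint64_t cap = 1ULL << 38;
		uint32_t cc = NVME_CAP_LIGHTNVM(cap) ?
				NVME_CC_CSS_LIGHTNVM : NVME_CC_CSS_NVM;

		printf("CC.CSS = %u\n", (cc >> 4) & 0x7);
		return 0;
	}
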
 drivers/block/Makefile        |   2 +-
 drivers/block/nvme-core.c     |  23 +-
 drivers/block/nvme-lightnvm.c | 568 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/nvme.h          |   6 +
 include/uapi/linux/nvme.h     |   3 +
 5 files changed, 598 insertions(+), 4 deletions(-)
 create mode 100644 drivers/block/nvme-lightnvm.c

diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index 02b688d..a01d7d8 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -44,6 +44,6 @@ obj-$(CONFIG_BLK_DEV_RSXX) += rsxx/
 obj-$(CONFIG_BLK_DEV_NULL_BLK)	+= null_blk.o
 obj-$(CONFIG_ZRAM) += zram/
 
-nvme-y		:= nvme-core.o nvme-scsi.o
+nvme-y		:= nvme-core.o nvme-scsi.o nvme-lightnvm.o
 skd-y		:= skd_main.o
 swim_mod-y	:= swim.o swim_asm.o
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 666e994..e47bd4b 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -40,6 +40,7 @@
 #include <linux/slab.h>
 #include <linux/t10-pi.h>
 #include <linux/types.h>
+#include <linux/lightnvm.h>
 #include <scsi/sg.h>
 #include <asm-generic/io-64-nonatomic-lo-hi.h>
 
@@ -1751,7 +1752,8 @@ static int nvme_configure_admin_queue(struct nvme_dev *dev)
 
 	dev->page_size = 1 << page_shift;
 
-	dev->ctrl_config = NVME_CC_CSS_NVM;
+	dev->ctrl_config = NVME_CAP_LIGHTNVM(cap) ?
+					NVME_CC_CSS_LIGHTNVM : NVME_CC_CSS_NVM;
 	dev->ctrl_config |= (page_shift - 12) << NVME_CC_MPS_SHIFT;
 	dev->ctrl_config |= NVME_CC_ARB_RR | NVME_CC_SHN_NONE;
 	dev->ctrl_config |= NVME_CC_IOSQES | NVME_CC_IOCQES;
@@ -1997,6 +1999,17 @@ static int nvme_revalidate_disk(struct gendisk *disk)
 		return -ENODEV;
 	}
 
+	if ((dev->ctrl_config & NVME_CC_CSS_LIGHTNVM) &&
+		id->nsfeat & NVME_NS_FEAT_NVM && ns->type != NVME_NS_NVM) {
+		if (nvme_nvm_register(ns->queue, disk->disk_name)) {
+			dev_warn(dev->dev,
+				    "%s: LightNVM init failure\n", __func__);
+			kfree(id);
+			return -ENODEV;
+		}
+		ns->type = NVME_NS_NVM;
+	}
+
 	old_ms = ns->ms;
 	lbaf = id->flbas & NVME_NS_FLBAS_LBA_MASK;
 	ns->lba_shift = id->lbaf[lbaf].ds;
@@ -2028,7 +2041,7 @@ static int nvme_revalidate_disk(struct gendisk *disk)
 								!ns->ext)
 		nvme_init_integrity(ns);
 
-	if (ns->ms && !blk_get_integrity(disk))
+	if ((ns->ms && !blk_get_integrity(disk)) || ns->type == NVME_NS_NVM)
 		set_capacity(disk, 0);
 	else
 		set_capacity(disk, le64_to_cpup(&id->nsze) << (ns->lba_shift - 9));
@@ -2146,7 +2159,8 @@ static void nvme_alloc_ns(struct nvme_dev *dev, unsigned nsid)
 	if (nvme_revalidate_disk(ns->disk))
 		goto out_free_disk;
 
-	add_disk(ns->disk);
+	if (ns->type != NVME_NS_NVM)
+		add_disk(ns->disk);
 	if (ns->ms) {
 		struct block_device *bd = bdget_disk(ns->disk, 0);
 		if (!bd)
@@ -2345,6 +2359,9 @@ static void nvme_free_namespace(struct nvme_ns *ns)
 {
 	list_del(&ns->list);
 
+	if (ns->type == NVME_NS_NVM)
+		nvme_nvm_unregister(ns->disk->disk_name);
+
 	spin_lock(&dev_list_lock);
 	ns->disk->private_data = NULL;
 	spin_unlock(&dev_list_lock);
diff --git a/drivers/block/nvme-lightnvm.c b/drivers/block/nvme-lightnvm.c
new file mode 100644
index 0000000..8ad84c9
--- /dev/null
+++ b/drivers/block/nvme-lightnvm.c
@@ -0,0 +1,568 @@
+/*
+ * nvme-lightnvm.c - LightNVM NVMe device
+ *
+ * Copyright (C) 2014-2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mb@lightnvm.io>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; see the file COPYING.  If not, write to
+ * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
+ * USA.
+ *
+ */
+
+#include <linux/nvme.h>
+#include <linux/bitops.h>
+#include <linux/lightnvm.h>
+
+#ifdef CONFIG_NVM
+
+enum nvme_nvm_opcode {
+	nvme_nvm_cmd_hb_write	= 0x81,
+	nvme_nvm_cmd_hb_read	= 0x02,
+	nvme_nvm_cmd_phys_write	= 0x91,
+	nvme_nvm_cmd_phys_read	= 0x92,
+	nvme_nvm_cmd_erase	= 0x90,
+};
+
+enum nvme_nvm_admin_opcode {
+	nvme_nvm_admin_identify		= 0xe2,
+	nvme_nvm_admin_get_features	= 0xe6,
+	nvme_nvm_admin_set_resp		= 0xe5,
+	nvme_nvm_admin_get_l2p_tbl	= 0xea,
+	nvme_nvm_admin_get_bb_tbl	= 0xf2,
+	nvme_nvm_admin_set_bb_tbl	= 0xf1,
+};
+
+struct nvme_nvm_hb_rw {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd2;
+	__le64			metadata;
+	__le64			prp1;
+	__le64			prp2;
+	__le64			slba;
+	__le16			length;
+	__le16			control;
+	__le32			dsmgmt;
+	__le64			phys_addr;
+};
+
+struct nvme_nvm_identify {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd[2];
+	__le64			prp1;
+	__le64			prp2;
+	__le32			chnl_off;
+	__u32			rsvd11[5];
+};
+
+struct nvme_nvm_l2ptbl {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__le32			cdw2[4];
+	__le64			prp1;
+	__le64			prp2;
+	__le64			slba;
+	__le32			nlb;
+	__le16			cdw14[6];
+};
+
+struct nvme_nvm_bbtbl {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd[2];
+	__le64			prp1;
+	__le64			prp2;
+	__le32			prp1_len;
+	__le32			prp2_len;
+	__le32			lbb;
+	__u32			rsvd11[3];
+};
+
+struct nvme_nvm_set_resp {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd[2];
+	__le64			prp1;
+	__le64			prp2;
+	__le64			resp;
+	__u32			rsvd11[4];
+};
+
+struct nvme_nvm_erase_blk {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd[2];
+	__le64			prp1;
+	__le64			prp2;
+	__le64			blk_addr;
+	__u32			rsvd11[4];
+};
+
+struct nvme_nvm_command {
+	union {
+		struct nvme_common_command common;
+		struct nvme_nvm_identify nvm_identify;
+		struct nvme_nvm_hb_rw nvm_hb_rw;
+		struct nvme_nvm_l2ptbl nvm_l2p;
+		struct nvme_nvm_bbtbl nvm_get_bb;
+		struct nvme_nvm_bbtbl nvm_set_bb;
+		struct nvme_nvm_set_resp nvm_resp;
+		struct nvme_nvm_erase_blk nvm_erase;
+	};
+};
+
+/*
+ * Check we didn't inadvertently grow the command struct
+ */
+static inline void _nvme_nvm_check_size(void)
+{
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_identify) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_hb_rw) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_l2ptbl) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_bbtbl) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_set_resp) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_erase_blk) != 64);
+}
+
+struct nvme_nvm_id_chnl {
+	__le64			laddr_begin;
+	__le64			laddr_end;
+	__le32			oob_size;
+	__le32			queue_size;
+	__le32			gran_read;
+	__le32			gran_write;
+	__le32			gran_erase;
+	__le32			t_r;
+	__le32			t_sqr;
+	__le32			t_w;
+	__le32			t_sqw;
+	__le32			t_e;
+	__le16			chnl_parallelism;
+	__u8			io_sched;
+	__u8			reserved[133];
+} __packed;
+
+struct nvme_nvm_id {
+	__u8			ver_id;
+	__u8			nvm_type;
+	__le16			nchannels;
+	__u8			reserved[252];
+	struct nvme_nvm_id_chnl	chnls[];
+} __packed;
+
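+/* number of channel descriptors that fit in one 4KB identify response */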
+#define NVME_NVM_CHNLS_PR_REQ ((4096U - sizeof(struct nvme_nvm_id)) \
+					/ sizeof(struct nvme_nvm_id_chnl))
+
+
+static int init_chnls(struct request_queue *q, struct nvm_id *nvm_id,
+						struct nvme_nvm_id *nvme_nvm_id)
+{
+	struct nvme_nvm_id_chnl *src = nvme_nvm_id->chnls;
+	struct nvm_id_chnl *dst = nvm_id->chnls;
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_command c = {
+		.nvm_identify.opcode = nvme_nvm_admin_identify,
+		.nvm_identify.nsid = cpu_to_le32(ns->ns_id),
+	};
+	unsigned int len = nvm_id->nchannels;
+	int i, end, ret, off = 0;
+
+	while (len) {
+		end = min_t(u32, NVME_NVM_CHNLS_PR_REQ, len);
+
+		for (i = 0; i < end; i++, dst++, src++) {
+			dst->laddr_begin = le64_to_cpu(src->laddr_begin);
+			dst->laddr_end = le64_to_cpu(src->laddr_end);
+			dst->oob_size = le32_to_cpu(src->oob_size);
+			dst->queue_size = le32_to_cpu(src->queue_size);
+			dst->gran_read = le32_to_cpu(src->gran_read);
+			dst->gran_write = le32_to_cpu(src->gran_write);
+			dst->gran_erase = le32_to_cpu(src->gran_erase);
+			dst->t_r = le32_to_cpu(src->t_r);
+			dst->t_sqr = le32_to_cpu(src->t_sqr);
+			dst->t_w = le32_to_cpu(src->t_w);
+			dst->t_sqw = le32_to_cpu(src->t_sqw);
+			dst->t_e = le32_to_cpu(src->t_e);
+			dst->io_sched = src->io_sched;
+		}
+
+		len -= end;
+		if (!len)
+			break;
+
+		off += end;
+
+		c.nvm_identify.chnl_off = cpu_to_le32(off);
+
+		ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c,
+							nvme_nvm_id, 4096);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
+static int nvme_nvm_identify(struct request_queue *q, struct nvm_id *nvm_id)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_id *nvme_nvm_id;
+	struct nvme_nvm_command c = {
+		.nvm_identify.opcode = nvme_nvm_admin_identify,
+		.nvm_identify.nsid = cpu_to_le32(ns->ns_id),
+		.nvm_identify.chnl_off = 0,
+	};
+	int ret;
+
+	nvme_nvm_id = kmalloc(4096, GFP_KERNEL);
+	if (!nvme_nvm_id)
+		return -ENOMEM;
+
+	ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c, nvme_nvm_id,
+									4096);
+	if (ret) {
+		ret = -EIO;
+		goto out;
+	}
+
+	nvm_id->ver_id = nvme_nvm_id->ver_id;
+	nvm_id->nvm_type = nvme_nvm_id->nvm_type;
+	nvm_id->nchannels = le16_to_cpu(nvme_nvm_id->nchannels);
+
+	if (!nvm_id->chnls)
+		nvm_id->chnls = kmalloc(sizeof(struct nvm_id_chnl)
+					* nvm_id->nchannels, GFP_KERNEL);
+	if (!nvm_id->chnls) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	ret = init_chnls(q, nvm_id, nvme_nvm_id);
+out:
+	kfree(nvme_nvm_id);
+	return ret;
+}
+
+static int nvme_nvm_get_features(struct request_queue *q,
+						struct nvm_get_features *gf)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_command c = {
+		.common.opcode = nvme_nvm_admin_get_features,
+		.common.nsid = cpu_to_le32(ns->ns_id),
+	};
+	int sz = sizeof(struct nvm_get_features);
+	int ret;
+	u64 *resp;
+
+	resp = kmalloc(sz, GFP_KERNEL);
+	if (!resp)
+		return -ENOMEM;
+
+	ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c, resp, sz);
+	if (ret)
+		goto done;
+
+	gf->rsp = le64_to_cpu(resp[0]);
+	gf->ext = le64_to_cpu(resp[1]);
+
+done:
+	kfree(resp);
+	return ret;
+}
+
+static int nvme_nvm_set_resp(struct request_queue *q, u64 resp)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_command c = {
+		.nvm_resp.opcode = nvme_nvm_admin_set_resp,
+		.nvm_resp.nsid = cpu_to_le32(ns->ns_id),
+		.nvm_resp.resp = cpu_to_le64(resp),
+	};
+
+	return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0);
+}
+
+static int nvme_nvm_get_l2p_tbl(struct request_queue *q, u64 slba, u64 nlb,
+				nvm_l2p_update_fn *update_l2p, void *priv)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_dev *dev = ns->dev;
+	struct nvme_nvm_command c = {
+		.nvm_l2p.opcode = nvme_nvm_admin_get_l2p_tbl,
+		.nvm_l2p.nsid = cpu_to_le32(ns->ns_id),
+	};
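+	/* fetch the table in chunks of at most one max-sized request
+	 * worth of 8-byte entries */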
+	u32 len = queue_max_hw_sectors(q) << 9;
+	u64 nlb_pr_rq = len / sizeof(u64);
+	u64 cmd_slba = slba;
+	void *entries;
+	int ret = 0;
+
+	entries = kmalloc(len, GFP_KERNEL);
+	if (!entries)
+		return -ENOMEM;
+
+	while (nlb) {
+		u64 cmd_nlb = min_t(u64, nlb_pr_rq, nlb);
+
+		c.nvm_l2p.slba = cpu_to_le64(cmd_slba);
+		c.nvm_l2p.nlb = cpu_to_le32(cmd_nlb);
+
+		ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c,
+								entries, len);
+		if (ret) {
+			dev_err(dev->dev, "L2P table transfer failed (%d)\n",
+									ret);
+			ret = -EIO;
+			goto out;
+		}
+
+		if (update_l2p(cmd_slba, cmd_nlb, entries, priv)) {
+			ret = -EINTR;
+			goto out;
+		}
+
+		cmd_slba += cmd_nlb;
+		nlb -= cmd_nlb;
+	}
+
+out:
+	kfree(entries);
+	return ret;
+}
+
+static int nvme_nvm_set_bb_tbl(struct request_queue *q, int lunid,
+	unsigned int nr_blocks, nvm_bb_update_fn *update_bbtbl, void *priv)
+{
+	return 0;
+}
+
+static int nvme_nvm_get_bb_tbl(struct request_queue *q, int lunid,
+	unsigned int nr_blocks, nvm_bb_update_fn *update_bbtbl, void *priv)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_dev *dev = ns->dev;
+	struct nvme_nvm_command c = {
+		.nvm_get_bb.opcode = nvme_nvm_admin_get_bb_tbl,
+		.nvm_get_bb.nsid = cpu_to_le32(ns->ns_id),
+		.nvm_get_bb.lbb = cpu_to_le32(lunid),
+	};
+	void *bb_bitmap;
+	u16 bb_bitmap_size;
+	int ret = 0;
+
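+	/* one bit per block, rounded up to whole pages; ">> 15" assumes
+	 * 4K pages (32768 bits each) */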
+	bb_bitmap_size = ((nr_blocks >> 15) + 1) * PAGE_SIZE;
+	bb_bitmap = kmalloc(bb_bitmap_size, GFP_KERNEL);
+	if (!bb_bitmap)
+		return -ENOMEM;
+
+	bitmap_zero(bb_bitmap, nr_blocks);
+
+	ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c, bb_bitmap,
+								bb_bitmap_size);
+	if (ret) {
+		dev_err(dev->dev, "get bad block table failed (%d)\n", ret);
+		ret = -EIO;
+		goto out;
+	}
+
+	ret = update_bbtbl(lunid, bb_bitmap, nr_blocks, priv);
+	if (ret) {
+		ret = -EINTR;
+		goto out;
+	}
+
+out:
+	kfree(bb_bitmap);
+	return ret;
+}
+
+static inline void nvme_nvm_rqtocmd(struct request *rq, struct nvm_rq *rqd,
+				struct nvme_ns *ns, struct nvme_nvm_command *c)
+{
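+	/* hybrid r/w carries both the logical slba and the physical
+	 * address(es) resolved by the target */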
+	c->nvm_hb_rw.opcode = (rq_data_dir(rq) ?
+				nvme_nvm_cmd_hb_write : nvme_nvm_cmd_hb_read);
+	c->nvm_hb_rw.nsid = cpu_to_le32(ns->ns_id);
+	c->nvm_hb_rw.slba = cpu_to_le64(nvme_block_nr(ns,
+						rqd->bio->bi_iter.bi_sector));
+	c->nvm_hb_rw.length = cpu_to_le16(
+		(blk_rq_bytes(rq) >> ns->lba_shift) - 1);
+
+	if (rqd->npages == 1)
+		c->nvm_hb_rw.phys_addr =
+				cpu_to_le64(nvme_block_nr(ns, rqd->ppa));
+	else
+		c->nvm_hb_rw.phys_addr = cpu_to_le64(rqd->dma_ppa_list);
+}
+
+static void nvme_nvm_end_io(struct request *rq, int error)
+{
+	struct nvm_rq *rqd = rq->end_io_data;
+	struct nvm_tgt_instance *ins = rqd->ins;
+
+	ins->tt->end_io(rqd, error);
+
+	kfree(rq->cmd);
+	blk_mq_free_request(rq);
+}
+
+static int nvme_nvm_submit_io(struct request_queue *q, struct nvm_rq *rqd)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct request *rq;
+	struct bio *bio = rqd->bio;
+	struct nvme_nvm_command *cmd;
+
+	rq = blk_mq_alloc_request(q, bio_rw(bio), GFP_KERNEL, 0);
+	if (IS_ERR(rq))
+		return -ENOMEM;
+
+	cmd = kzalloc(sizeof(struct nvme_nvm_command), GFP_KERNEL);
+	if (!cmd) {
+		blk_mq_free_request(rq);
+		return -ENOMEM;
+	}
+
+	rq->cmd_type = REQ_TYPE_DRV_PRIV;
+	rq->ioprio = bio_prio(bio);
+
+	if (bio_has_data(bio))
+		rq->nr_phys_segments = bio_phys_segments(q, bio);
+
+	rq->__data_len = bio->bi_iter.bi_size;
+	rq->bio = rq->biotail = bio;
+
+	nvme_nvm_rqtocmd(rq, rqd, ns, cmd);
+
+	rq->cmd = (unsigned char *)cmd;
+	rq->cmd_len = sizeof(struct nvme_nvm_command);
+	rq->special = (void *)0;
+
+	rq->end_io_data = rqd;
+
+	blk_execute_rq_nowait(q, NULL, rq, 0, nvme_nvm_end_io);
+
+	return 0;
+}
+
+static int nvme_nvm_erase_block(struct request_queue *q, sector_t block_id)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_command c = {
+		.nvm_erase.opcode = nvme_nvm_cmd_erase,
+		.nvm_erase.nsid = cpu_to_le32(ns->ns_id),
+		.nvm_erase.blk_addr = cpu_to_le64(block_id),
+	};
+
+	return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0);
+}
+
+static void *nvme_nvm_create_ppa_pool(struct request_queue *q)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_dev *dev = ns->dev;
+	struct dma_pool *dma_pool;
+
+	dma_pool = dma_pool_create("ppa list", dev->dev, PAGE_SIZE, PAGE_SIZE,
+									0);
+	if (!dma_pool) {
+		dev_err(dev->dev, "Unable to create DMA pool\n");
+		return NULL;
+	}
+
+	return dma_pool;
+}
+
+static void nvme_nvm_destroy_ppa_pool(void *pool)
+{
+	struct dma_pool *dma_pool = pool;
+
+	dma_pool_destroy(dma_pool);
+}
+
+static void *nvme_nvm_alloc_ppalist(struct request_queue *q, void *pool,
+				    gfp_t mem_flags, dma_addr_t *dma_handler)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_dev *dev = ns->dev;
+	sector_t *ppa_list;
+	struct dma_pool *ppalist_pool = pool;
+
+	ppa_list = dma_pool_alloc(ppalist_pool, mem_flags, dma_handler);
+	if (!ppa_list) {
+		dev_err(dev->dev, "Unable to allocate DMA\n");
+		return NULL;
+	}
+
+	return ppa_list;
+}
+
+static void nvme_nvm_free_ppalist(void *pool, void *ppa_list,
+							dma_addr_t dma_handler)
+{
+	struct dma_pool *ppalist_pool = pool;
+
+	dma_pool_free(ppalist_pool, ppa_list, dma_handler);
+}
+
+static struct nvm_dev_ops nvme_nvm_dev_ops = {
+	.identify		= nvme_nvm_identify,
+
+	.get_features		= nvme_nvm_get_features,
+	.set_responsibility	= nvme_nvm_set_resp,
+
+	.get_l2p_tbl		= nvme_nvm_get_l2p_tbl,
+
+	.set_bb_tbl		= nvme_nvm_set_bb_tbl,
+	.get_bb_tbl		= nvme_nvm_get_bb_tbl,
+
+	.submit_io		= nvme_nvm_submit_io,
+	.erase_block		= nvme_nvm_erase_block,
+
+	.create_ppa_pool	= nvme_nvm_create_ppa_pool,
+	.destroy_ppa_pool	= nvme_nvm_destroy_ppa_pool,
+	.alloc_ppalist		= nvme_nvm_alloc_ppalist,
+	.free_ppalist		= nvme_nvm_free_ppalist,
+
+	.max_phys_sect		= 64,
+};
+
+int nvme_nvm_register(struct request_queue *q, char *disk_name)
+{
+	return nvm_register(q, disk_name, &nvme_nvm_dev_ops);
+}
+
+void nvme_nvm_unregister(char *disk_name)
+{
+	nvm_unregister(disk_name);
+}
+#else
+int nvme_nvm_register(struct request_queue *q, char *disk_name)
+{
+	return 0;
+}
+void nvme_nvm_unregister(char *disk_name) {}
+#endif /* CONFIG_NVM */
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index fa3fe16..bd587b1 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -19,6 +19,7 @@
 #include <linux/pci.h>
 #include <linux/kref.h>
 #include <linux/blk-mq.h>
+#include <linux/lightnvm.h>
 
 struct nvme_bar {
 	__u64			cap;	/* Controller Capabilities */
@@ -41,6 +42,7 @@ struct nvme_bar {
 #define NVME_CAP_STRIDE(cap)	(((cap) >> 32) & 0xf)
 #define NVME_CAP_MPSMIN(cap)	(((cap) >> 48) & 0xf)
 #define NVME_CAP_MPSMAX(cap)	(((cap) >> 52) & 0xf)
+#define NVME_CAP_LIGHTNVM(cap)	(((cap) >> 38) & 0x1)
 
 #define NVME_CMB_BIR(cmbloc)	((cmbloc) & 0x7)
 #define NVME_CMB_OFST(cmbloc)	(((cmbloc) >> 12) & 0xfffff)
@@ -56,6 +58,7 @@ struct nvme_bar {
 enum {
 	NVME_CC_ENABLE		= 1 << 0,
 	NVME_CC_CSS_NVM		= 0 << 4,
+	NVME_CC_CSS_LIGHTNVM	= 1 << 4,
 	NVME_CC_MPS_SHIFT	= 7,
 	NVME_CC_ARB_RR		= 0 << 11,
 	NVME_CC_ARB_WRRU	= 1 << 11,
@@ -138,6 +141,7 @@ struct nvme_ns {
 	u16 ms;
 	bool ext;
 	u8 pi_type;
+	int type;
 	u64 mode_select_num_blocks;
 	u32 mode_select_block_len;
 };
@@ -184,4 +188,6 @@ int nvme_sg_io(struct nvme_ns *ns, struct sg_io_hdr __user *u_hdr);
 int nvme_sg_io32(struct nvme_ns *ns, unsigned long arg);
 int nvme_sg_get_version_num(int __user *ip);
 
+int nvme_nvm_register(struct request_queue *q, char *disk_name);
+void nvme_nvm_unregister(char *disk_name);
 #endif /* _LINUX_NVME_H */
diff --git a/include/uapi/linux/nvme.h b/include/uapi/linux/nvme.h
index 732b32e..0374f11 100644
--- a/include/uapi/linux/nvme.h
+++ b/include/uapi/linux/nvme.h
@@ -130,6 +130,7 @@ struct nvme_id_ns {
 
 enum {
 	NVME_NS_FEAT_THIN	= 1 << 0,
+	NVME_NS_FEAT_NVM	= 1 << 3,
 	NVME_NS_FLBAS_LBA_MASK	= 0xf,
 	NVME_NS_FLBAS_META_EXT	= 0x10,
 	NVME_LBAF_RP_BEST	= 0,
@@ -146,6 +147,8 @@ enum {
 	NVME_NS_DPS_PI_TYPE1	= 1,
 	NVME_NS_DPS_PI_TYPE2	= 2,
 	NVME_NS_DPS_PI_TYPE3	= 3,
+
+	NVME_NS_NVM		= 1,
 };
 
 struct nvme_smart_log {
-- 
2.1.4


* [PATCH v7 5/5] nvme: LightNVM support
@ 2015-08-07 14:29   ` Matias Bjørling
  0 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-08-07 14:29 UTC (permalink / raw)


The first generation of Open-Channel SSDs will be based on NVMe. The
integration requires that an NVMe device exposes itself as a LightNVM
device. Currently this is done by hooking into the Controller
Capabilities (CAP register) and a bit in NSFEAT for each namespace.

After detection, vendor-specific commands are used to identify the
device and enumerate supported features.

Signed-off-by: Javier González <jg@lightnvm.io>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
 drivers/block/Makefile        |   2 +-
 drivers/block/nvme-core.c     |  23 +-
 drivers/block/nvme-lightnvm.c | 568 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/nvme.h          |   6 +
 include/uapi/linux/nvme.h     |   3 +
 5 files changed, 598 insertions(+), 4 deletions(-)
 create mode 100644 drivers/block/nvme-lightnvm.c

diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index 02b688d..a01d7d8 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -44,6 +44,6 @@ obj-$(CONFIG_BLK_DEV_RSXX) += rsxx/
 obj-$(CONFIG_BLK_DEV_NULL_BLK)	+= null_blk.o
 obj-$(CONFIG_ZRAM) += zram/
 
-nvme-y		:= nvme-core.o nvme-scsi.o
+nvme-y		:= nvme-core.o nvme-scsi.o nvme-lightnvm.o
 skd-y		:= skd_main.o
 swim_mod-y	:= swim.o swim_asm.o
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 666e994..e47bd4b 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -40,6 +40,7 @@
 #include <linux/slab.h>
 #include <linux/t10-pi.h>
 #include <linux/types.h>
+#include <linux/lightnvm.h>
 #include <scsi/sg.h>
 #include <asm-generic/io-64-nonatomic-lo-hi.h>
 
@@ -1751,7 +1752,8 @@ static int nvme_configure_admin_queue(struct nvme_dev *dev)
 
 	dev->page_size = 1 << page_shift;
 
-	dev->ctrl_config = NVME_CC_CSS_NVM;
+	dev->ctrl_config = NVME_CAP_LIGHTNVM(cap) ?
+					NVME_CC_CSS_LIGHTNVM : NVME_CC_CSS_NVM;
 	dev->ctrl_config |= (page_shift - 12) << NVME_CC_MPS_SHIFT;
 	dev->ctrl_config |= NVME_CC_ARB_RR | NVME_CC_SHN_NONE;
 	dev->ctrl_config |= NVME_CC_IOSQES | NVME_CC_IOCQES;
@@ -1997,6 +1999,17 @@ static int nvme_revalidate_disk(struct gendisk *disk)
 		return -ENODEV;
 	}
 
+	if ((dev->ctrl_config & NVME_CC_CSS_LIGHTNVM) &&
+		id->nsfeat & NVME_NS_FEAT_NVM && ns->type != NVME_NS_NVM) {
+		if (nvme_nvm_register(ns->queue, disk->disk_name)) {
+			dev_warn(dev->dev,
+				    "%s: LightNVM init failure\n", __func__);
+			kfree(id);
+			return -ENODEV;
+		}
+		ns->type = NVME_NS_NVM;
+	}
+
 	old_ms = ns->ms;
 	lbaf = id->flbas & NVME_NS_FLBAS_LBA_MASK;
 	ns->lba_shift = id->lbaf[lbaf].ds;
@@ -2028,7 +2041,7 @@ static int nvme_revalidate_disk(struct gendisk *disk)
 								!ns->ext)
 		nvme_init_integrity(ns);
 
-	if (ns->ms && !blk_get_integrity(disk))
+	if ((ns->ms && !blk_get_integrity(disk)) || ns->type == NVME_NS_NVM)
 		set_capacity(disk, 0);
 	else
 		set_capacity(disk, le64_to_cpup(&id->nsze) << (ns->lba_shift - 9));
@@ -2146,7 +2159,8 @@ static void nvme_alloc_ns(struct nvme_dev *dev, unsigned nsid)
 	if (nvme_revalidate_disk(ns->disk))
 		goto out_free_disk;
 
-	add_disk(ns->disk);
+	if (ns->type != NVME_NS_NVM)
+		add_disk(ns->disk);
 	if (ns->ms) {
 		struct block_device *bd = bdget_disk(ns->disk, 0);
 		if (!bd)
@@ -2345,6 +2359,9 @@ static void nvme_free_namespace(struct nvme_ns *ns)
 {
 	list_del(&ns->list);
 
+	if (ns->type == NVME_NS_NVM)
+		nvme_nvm_unregister(ns->disk->disk_name);
+
 	spin_lock(&dev_list_lock);
 	ns->disk->private_data = NULL;
 	spin_unlock(&dev_list_lock);
diff --git a/drivers/block/nvme-lightnvm.c b/drivers/block/nvme-lightnvm.c
new file mode 100644
index 0000000..8ad84c9
--- /dev/null
+++ b/drivers/block/nvme-lightnvm.c
@@ -0,0 +1,568 @@
+/*
+ * nvme-lightnvm.c - LightNVM NVMe device
+ *
+ * Copyright (C) 2014-2015 IT University of Copenhagen
+ * Initial release: Matias Bjorling <mb at lightnvm.io>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; see the file COPYING.  If not, write to
+ * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
+ * USA.
+ *
+ */
+
+#include <linux/nvme.h>
+#include <linux/bitops.h>
+#include <linux/lightnvm.h>
+
+#ifdef CONFIG_NVM
+
+enum nvme_nvm_opcode {
+	nvme_nvm_cmd_hb_write	= 0x81,
+	nvme_nvm_cmd_hb_read	= 0x02,
+	nvme_nvm_cmd_phys_write	= 0x91,
+	nvme_nvm_cmd_phys_read	= 0x92,
+	nvme_nvm_cmd_erase	= 0x90,
+};
+
+enum nvme_nvm_admin_opcode {
+	nvme_nvm_admin_identify		= 0xe2,
+	nvme_nvm_admin_get_features	= 0xe6,
+	nvme_nvm_admin_set_resp		= 0xe5,
+	nvme_nvm_admin_get_l2p_tbl	= 0xea,
+	nvme_nvm_admin_get_bb_tbl	= 0xf2,
+	nvme_nvm_admin_set_bb_tbl	= 0xf1,
+};
+
+struct nvme_nvm_hb_rw {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd2;
+	__le64			metadata;
+	__le64			prp1;
+	__le64			prp2;
+	__le64			slba;
+	__le16			length;
+	__le16			control;
+	__le32			dsmgmt;
+	__le64			phys_addr;
+};
+
+struct nvme_nvm_identify {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd[2];
+	__le64			prp1;
+	__le64			prp2;
+	__le32			chnl_off;
+	__u32			rsvd11[5];
+};
+
+struct nvme_nvm_l2ptbl {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__le32			cdw2[4];
+	__le64			prp1;
+	__le64			prp2;
+	__le64			slba;
+	__le32			nlb;
+	__le16			cdw14[6];
+};
+
+struct nvme_nvm_bbtbl {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd[2];
+	__le64			prp1;
+	__le64			prp2;
+	__le32			prp1_len;
+	__le32			prp2_len;
+	__le32			lbb;
+	__u32			rsvd11[3];
+};
+
+struct nvme_nvm_set_resp {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd[2];
+	__le64			prp1;
+	__le64			prp2;
+	__le64			resp;
+	__u32			rsvd11[4];
+};
+
+struct nvme_nvm_erase_blk {
+	__u8			opcode;
+	__u8			flags;
+	__u16			command_id;
+	__le32			nsid;
+	__u64			rsvd[2];
+	__le64			prp1;
+	__le64			prp2;
+	__le64			blk_addr;
+	__u32			rsvd11[4];
+};
+
+struct nvme_nvm_command {
+	union {
+		struct nvme_common_command common;
+		struct nvme_nvm_identify nvm_identify;
+		struct nvme_nvm_hb_rw nvm_hb_rw;
+		struct nvme_nvm_l2ptbl nvm_l2p;
+		struct nvme_nvm_bbtbl nvm_get_bb;
+		struct nvme_nvm_bbtbl nvm_set_bb;
+		struct nvme_nvm_set_resp nvm_resp;
+		struct nvme_nvm_erase_blk nvm_erase;
+	};
+};
+
+/*
+ * Check we didin't inadvertently grow the command struct
+ */
+static inline void _nvme_nvm_check_size(void)
+{
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_identify) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_hb_rw) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_l2ptbl) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_bbtbl) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_set_resp) != 64);
+	BUILD_BUG_ON(sizeof(struct nvme_nvm_erase_blk) != 64);
+}
+
+struct nvme_nvm_id_chnl {
+	__le64			laddr_begin;
+	__le64			laddr_end;
+	__le32			oob_size;
+	__le32			queue_size;
+	__le32			gran_read;
+	__le32			gran_write;
+	__le32			gran_erase;
+	__le32			t_r;
+	__le32			t_sqr;
+	__le32			t_w;
+	__le32			t_sqw;
+	__le32			t_e;
+	__le16			chnl_parallelism;
+	__u8			io_sched;
+	__u8			reserved[133];
+} __packed;
+
+struct nvme_nvm_id {
+	__u8			ver_id;
+	__u8			nvm_type;
+	__le16			nchannels;
+	__u8			reserved[252];
+	struct nvme_nvm_id_chnl	chnls[];
+} __packed;
+
+#define NVME_NVM_CHNLS_PR_REQ ((4096U - sizeof(struct nvme_nvm_id)) \
+					/ sizeof(struct nvme_nvm_id_chnl))
+
+
+static int init_chnls(struct request_queue *q, struct nvm_id *nvm_id,
+						struct nvme_nvm_id *nvme_nvm_id)
+{
+	struct nvme_nvm_id_chnl *src = nvme_nvm_id->chnls;
+	struct nvm_id_chnl *dst = nvm_id->chnls;
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_command c = {
+		.nvm_identify.opcode = nvme_nvm_admin_identify,
+		.nvm_identify.nsid = cpu_to_le32(ns->ns_id),
+	};
+	unsigned int len = nvm_id->nchannels;
+	int i, end, ret, off = 0;
+
+	while (len) {
+		end = min_t(u32, NVME_NVM_CHNLS_PR_REQ, len);
+
+		for (i = 0; i < end; i++, dst++, src++) {
+			dst->laddr_begin = le64_to_cpu(src->laddr_begin);
+			dst->laddr_end = le64_to_cpu(src->laddr_end);
+			dst->oob_size = le32_to_cpu(src->oob_size);
+			dst->queue_size = le32_to_cpu(src->queue_size);
+			dst->gran_read = le32_to_cpu(src->gran_read);
+			dst->gran_write = le32_to_cpu(src->gran_write);
+			dst->gran_erase = le32_to_cpu(src->gran_erase);
+			dst->t_r = le32_to_cpu(src->t_r);
+			dst->t_sqr = le32_to_cpu(src->t_sqr);
+			dst->t_w = le32_to_cpu(src->t_w);
+			dst->t_sqw = le32_to_cpu(src->t_sqw);
+			dst->t_e = le32_to_cpu(src->t_e);
+			dst->io_sched = src->io_sched;
+		}
+
+		len -= end;
+		if (!len)
+			break;
+
+		off += end;
+
+		c.nvm_identify.chnl_off = off;
+
+		ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c,
+							nvme_nvm_id, 4096);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
+static int nvme_nvm_identify(struct request_queue *q, struct nvm_id *nvm_id)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_id *nvme_nvm_id;
+	struct nvme_nvm_command c = {
+		.nvm_identify.opcode = nvme_nvm_admin_identify,
+		.nvm_identify.nsid = cpu_to_le32(ns->ns_id),
+		.nvm_identify.chnl_off = 0,
+	};
+	int ret;
+
+	nvme_nvm_id = kmalloc(4096, GFP_KERNEL);
+	if (!nvme_nvm_id)
+		return -ENOMEM;
+
+	ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c, nvme_nvm_id,
+									4096);
+	if (ret) {
+		ret = -EIO;
+		goto out;
+	}
+
+	nvm_id->ver_id = nvme_nvm_id->ver_id;
+	nvm_id->nvm_type = nvme_nvm_id->nvm_type;
+	nvm_id->nchannels = le16_to_cpu(nvme_nvm_id->nchannels);
+
+	if (!nvm_id->chnls)
+		nvm_id->chnls = kmalloc(sizeof(struct nvm_id_chnl)
+					* nvm_id->nchannels, GFP_KERNEL);
+	if (!nvm_id->chnls) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	ret = init_chnls(q, nvm_id, nvme_nvm_id);
+out:
+	kfree(nvme_nvm_id);
+	return ret;
+}
+
+static int nvme_nvm_get_features(struct request_queue *q,
+						struct nvm_get_features *gf)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_command c = {
+		.common.opcode = nvme_nvm_admin_get_features,
+		.common.nsid = ns->ns_id,
+	};
+	int sz = sizeof(struct nvm_get_features);
+	int ret;
+	u64 *resp;
+
+	resp = kmalloc(sz, GFP_KERNEL);
+	if (!resp)
+		return -ENOMEM;
+
+	ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c, resp, sz);
+	if (ret)
+		goto done;
+
+	gf->rsp = le64_to_cpu(resp[0]);
+	gf->ext = le64_to_cpu(resp[1]);
+
+done:
+	kfree(resp);
+	return ret;
+}
+
+static int nvme_nvm_set_resp(struct request_queue *q, u64 resp)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_command c = {
+		.nvm_resp.opcode = nvme_nvm_admin_set_resp,
+		.nvm_resp.nsid = cpu_to_le32(ns->ns_id),
+		.nvm_resp.resp = cpu_to_le64(resp),
+	};
+
+	return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0);
+}
+
+static int nvme_nvm_get_l2p_tbl(struct request_queue *q, u64 slba, u64 nlb,
+				nvm_l2p_update_fn *update_l2p, void *priv)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_dev *dev = ns->dev;
+	struct nvme_nvm_command c = {
+		.nvm_l2p.opcode = nvme_nvm_admin_get_l2p_tbl,
+		.nvm_l2p.nsid = cpu_to_le32(ns->ns_id),
+	};
+	u32 len = queue_max_hw_sectors(q) << 9;
+	u64 nlb_pr_rq = len / sizeof(u64);
+	u64 cmd_slba = slba;
+	void *entries;
+	int ret = 0;
+
+	entries = kmalloc(len, GFP_KERNEL);
+	if (!entries)
+		return -ENOMEM;
+
+	while (nlb) {
+		u64 cmd_nlb = min_t(u64, nlb_pr_rq, nlb);
+
+		c.nvm_l2p.slba = cmd_slba;
+		c.nvm_l2p.nlb = cmd_nlb;
+
+		ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c,
+								entries, len);
+		if (ret) {
+			dev_err(dev->dev, "L2P table transfer failed (%d)\n",
+									ret);
+			ret = -EIO;
+			goto out;
+		}
+
+		if (update_l2p(cmd_slba, cmd_nlb, entries, priv)) {
+			ret = -EINTR;
+			goto out;
+		}
+
+		cmd_slba += cmd_nlb;
+		nlb -= cmd_nlb;
+	}
+
+out:
+	kfree(entries);
+	return ret;
+}
+
+static int nvme_nvm_set_bb_tbl(struct request_queue *q, int lunid,
+	unsigned int nr_blocks, nvm_bb_update_fn *update_bbtbl, void *priv)
+{
+	return 0;
+}
+
+static int nvme_nvm_get_bb_tbl(struct request_queue *q, int lunid,
+	unsigned int nr_blocks, nvm_bb_update_fn *update_bbtbl, void *priv)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_dev *dev = ns->dev;
+	struct nvme_nvm_command c = {
+		.nvm_get_bb.opcode = nvme_nvm_admin_get_bb_tbl,
+		.nvm_get_bb.nsid = cpu_to_le32(ns->ns_id),
+		.nvm_get_bb.lbb = cpu_to_le32(lunid),
+	};
+	void *bb_bitmap;
+	u16 bb_bitmap_size;
+	int ret = 0;
+
+	bb_bitmap_size = ((nr_blocks >> 15) + 1) * PAGE_SIZE;
+	bb_bitmap = kmalloc(bb_bitmap_size, GFP_KERNEL);
+	if (!bb_bitmap)
+		return -ENOMEM;
+
+	bitmap_zero(bb_bitmap, nr_blocks);
+
+	ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c, bb_bitmap,
+								bb_bitmap_size);
+	if (ret) {
+		dev_err(dev->dev, "get bad block table failed (%d)\n", ret);
+		ret = -EIO;
+		goto out;
+	}
+
+	ret = update_bbtbl(lunid, bb_bitmap, nr_blocks, priv);
+	if (ret) {
+		ret = -EINTR;
+		goto out;
+	}
+
+out:
+	kfree(bb_bitmap);
+	return ret;
+}
+
+static inline void nvme_nvm_rqtocmd(struct request *rq, struct nvm_rq *rqd,
+				struct nvme_ns *ns, struct nvme_nvm_command *c)
+{
+	c->nvm_hb_rw.opcode = (rq_data_dir(rq) ?
+				nvme_nvm_cmd_hb_write : nvme_nvm_cmd_hb_read);
+	c->nvm_hb_rw.nsid = cpu_to_le32(ns->ns_id);
+	c->nvm_hb_rw.slba = cpu_to_le64(nvme_block_nr(ns,
+						rqd->bio->bi_iter.bi_sector));
+	c->nvm_hb_rw.length = cpu_to_le16(
+		(blk_rq_bytes(rq) >> ns->lba_shift) - 1);
+
+	if (rqd->npages == 1)
+		c->nvm_hb_rw.phys_addr =
+				cpu_to_le64(nvme_block_nr(ns, rqd->ppa));
+	else
+		c->nvm_hb_rw.phys_addr = cpu_to_le64(rqd->dma_ppa_list);
+}
+
+static void nvme_nvm_end_io(struct request *rq, int error)
+{
+	struct nvm_rq *rqd = rq->end_io_data;
+	struct nvm_tgt_instance *ins = rqd->ins;
+
+	ins->tt->end_io(rqd, error);
+
+	kfree(rq->cmd);
+	blk_mq_free_request(rq);
+}
+
+static int nvme_nvm_submit_io(struct request_queue *q, struct nvm_rq *rqd)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct request *rq;
+	struct bio *bio = rqd->bio;
+	struct nvme_nvm_command *cmd;
+
+	rq = blk_mq_alloc_request(q, bio_rw(bio), GFP_KERNEL, 0);
+	if (IS_ERR(rq))
+		return -ENOMEM;
+
+	cmd = kzalloc(sizeof(struct nvme_nvm_command), GFP_KERNEL);
+	if (!cmd) {
+		blk_mq_free_request(rq);
+		return -ENOMEM;
+	}
+
+	rq->cmd_type = REQ_TYPE_DRV_PRIV;
+	rq->ioprio = bio_prio(bio);
+
+	if (bio_has_data(bio))
+		rq->nr_phys_segments = bio_phys_segments(q, bio);
+
+	rq->__data_len = bio->bi_iter.bi_size;
+	rq->bio = rq->biotail = bio;
+
+	nvme_nvm_rqtocmd(rq, rqd, ns, cmd);
+
+	rq->cmd = (unsigned char *)cmd;
+	rq->cmd_len = sizeof(struct nvme_nvm_command);
+	rq->special = (void *)0;
+
+	rq->end_io_data = rqd;
+
+	blk_execute_rq_nowait(q, NULL, rq, 0, nvme_nvm_end_io);
+
+	return 0;
+}
+
+static int nvme_nvm_erase_block(struct request_queue *q, sector_t block_id)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_nvm_command c = {
+		.nvm_erase.opcode = nvme_nvm_cmd_erase,
+		.nvm_erase.nsid = cpu_to_le32(ns->ns_id),
+		.nvm_erase.blk_addr = cpu_to_le64(block_id),
+	};
+
+	return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0);
+}
+
+static void *nvme_nvm_create_ppa_pool(struct request_queue *q)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_dev *dev = ns->dev;
+	struct dma_pool *dma_pool;
+
+	dma_pool = dma_pool_create("ppa list", dev->dev, PAGE_SIZE, PAGE_SIZE,
+									0);
+	if (!dma_pool) {
+		dev_err(dev->dev, "Unable to create DMA pool\n");
+		return NULL;
+	}
+
+	return dma_pool;
+}
+
+static void nvme_nvm_destroy_ppa_pool(void *pool)
+{
+	struct dma_pool *dma_pool = pool;
+
+	dma_pool_destroy(dma_pool);
+}
+
+static void *nvme_nvm_alloc_ppalist(struct request_queue *q, void *pool,
+				    gfp_t mem_flags, dma_addr_t *dma_handler)
+{
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_dev *dev = ns->dev;
+	struct sector_t *ppa_list;
+	struct dma_pool *ppalist_pool = pool;
+
+	ppa_list = dma_pool_alloc(ppalist_pool, mem_flags, dma_handler);
+	if (!ppa_list) {
+		dev_err(dev->dev, "Unable to allocate DMA\n");
+		return NULL;
+	}
+
+	return ppa_list;
+}
+
+static void nvme_nvm_free_ppalist(void *pool, void *ppa_list,
+							dma_addr_t dma_handler)
+{
+	struct dma_pool *ppalist_pool = pool;
+
+	dma_pool_free(ppalist_pool, ppa_list, dma_handler);
+}
+
+static struct nvm_dev_ops nvme_nvm_dev_ops = {
+	.identify		= nvme_nvm_identify,
+
+	.get_features		= nvme_nvm_get_features,
+	.set_responsibility	= nvme_nvm_set_resp,
+
+	.get_l2p_tbl		= nvme_nvm_get_l2p_tbl,
+
+	.set_bb_tbl		= nvme_nvm_set_bb_tbl,
+	.get_bb_tbl		= nvme_nvm_get_bb_tbl,
+
+	.submit_io		= nvme_nvm_submit_io,
+	.erase_block		= nvme_nvm_erase_block,
+
+	.create_ppa_pool	= nvme_nvm_create_ppa_pool,
+	.destroy_ppa_pool	= nvme_nvm_destroy_ppa_pool,
+	.alloc_ppalist		= nvme_nvm_alloc_ppalist,
+	.free_ppalist		= nvme_nvm_free_ppalist,
+
+	.max_phys_sect		= 64,
+};
+
+int nvme_nvm_register(struct request_queue *q, char *disk_name)
+{
+	return nvm_register(q, disk_name, &nvme_nvm_dev_ops);
+}
+
+void nvme_nvm_unregister(char *disk_name)
+{
+	nvm_unregister(disk_name);
+}
+#else
+int nvme_nvm_register(struct request_queue *q, char *disk_name)
+{
+	return 0;
+}
+void nvme_nvm_unregister(char *disk_name) {};
+#endif /* CONFIG_NVM */
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index fa3fe16..bd587b1 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -19,6 +19,7 @@
 #include <linux/pci.h>
 #include <linux/kref.h>
 #include <linux/blk-mq.h>
+#include <linux/lightnvm.h>
 
 struct nvme_bar {
 	__u64			cap;	/* Controller Capabilities */
@@ -41,6 +42,7 @@ struct nvme_bar {
 #define NVME_CAP_STRIDE(cap)	(((cap) >> 32) & 0xf)
 #define NVME_CAP_MPSMIN(cap)	(((cap) >> 48) & 0xf)
 #define NVME_CAP_MPSMAX(cap)	(((cap) >> 52) & 0xf)
+#define NVME_CAP_LIGHTNVM(cap)	(((cap) >> 38) & 0x1)
 
 #define NVME_CMB_BIR(cmbloc)	((cmbloc) & 0x7)
 #define NVME_CMB_OFST(cmbloc)	(((cmbloc) >> 12) & 0xfffff)
@@ -56,6 +58,7 @@ struct nvme_bar {
 enum {
 	NVME_CC_ENABLE		= 1 << 0,
 	NVME_CC_CSS_NVM		= 0 << 4,
+	NVME_CC_CSS_LIGHTNVM	= 1 << 4,
 	NVME_CC_MPS_SHIFT	= 7,
 	NVME_CC_ARB_RR		= 0 << 11,
 	NVME_CC_ARB_WRRU	= 1 << 11,
@@ -138,6 +141,7 @@ struct nvme_ns {
 	u16 ms;
 	bool ext;
 	u8 pi_type;
+	int type;
 	u64 mode_select_num_blocks;
 	u32 mode_select_block_len;
 };
@@ -184,4 +188,6 @@ int nvme_sg_io(struct nvme_ns *ns, struct sg_io_hdr __user *u_hdr);
 int nvme_sg_io32(struct nvme_ns *ns, unsigned long arg);
 int nvme_sg_get_version_num(int __user *ip);
 
+int nvme_nvm_register(struct request_queue *q, char *disk_name);
+void nvme_nvm_unregister(char *disk_name);
 #endif /* _LINUX_NVME_H */
diff --git a/include/uapi/linux/nvme.h b/include/uapi/linux/nvme.h
index 732b32e..0374f11 100644
--- a/include/uapi/linux/nvme.h
+++ b/include/uapi/linux/nvme.h
@@ -130,6 +130,7 @@ struct nvme_id_ns {
 
 enum {
 	NVME_NS_FEAT_THIN	= 1 << 0,
+	NVME_NS_FEAT_NVM	= 1 << 3,
 	NVME_NS_FLBAS_LBA_MASK	= 0xf,
 	NVME_NS_FLBAS_META_EXT	= 0x10,
 	NVME_LBAF_RP_BEST	= 0,
@@ -146,6 +147,8 @@ enum {
 	NVME_NS_DPS_PI_TYPE1	= 1,
 	NVME_NS_DPS_PI_TYPE2	= 2,
 	NVME_NS_DPS_PI_TYPE3	= 3,
+
+	NVME_NS_NVM		= 1,
 };
 
 struct nvme_smart_log {
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v7 1/5] lightnvm: Support for Open-Channel SSDs
  2015-08-07 14:29   ` Matias Bjørling
@ 2015-09-02  3:50     ` Dongsheng Yang
  0 siblings, 0 replies; 33+ messages in thread
From: Dongsheng Yang @ 2015-09-02  3:50 UTC (permalink / raw)
  To: Matias Bjørling, hch, axboe, linux-fsdevel, linux-kernel,
	linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

On 08/07/2015 10:29 PM, Matias Bjørling wrote:
> Open-channel SSDs are devices that share responsibilities with the host
> in order to implement and maintain features that typical SSDs keep
> strictly in firmware. These include (i) the Flash Translation Layer
> (FTL), (ii) bad block management, and (iii) hardware units such as the
> flash controller, the interface controller, and a large number of flash
> chips. In this way, Open-channel SSDs expose direct access to their
> physical flash storage, while keeping a subset of the internal features
> of SSDs.
>
> LightNVM is a specification that gives support to Open-channel SSDs.
> LightNVM allows the host to manage data placement, garbage collection,
> and parallelism. Device specific responsibilities such as bad block
> management, FTL extensions to support atomic IOs, or metadata
> persistence are still handled by the device.
>
> The implementation of LightNVM consists of two parts: core and
> (multiple) targets. The core implements functionality shared across
> targets: initialization, teardown, and statistics. The targets
> implement the interface that exposes physical flash to user-space
> applications. Examples of such targets include key-value store,
> object-store, as well as traditional block devices, which can be
> application-specific.
>
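(To make the core/target split concrete: a target supplies a handful of
entry points in a struct nvm_tgt_type and registers them with the core. A
skeletal, hypothetical target, of which the rrpc target in patch 2/5 is a
real implementation, reduces to:

	static void skel_make_rq(struct request_queue *q, struct bio *bio)
	{
		/* map the bio onto an nvm_rq and nvm_submit_io() it */
	}

	static sector_t skel_capacity(void *private)
	{
		return 0;	/* report the usable size of the target */
	}

	static void skel_end_io(struct nvm_rq *rqd, int error)
	{
		/* complete the originating bio */
	}

	static void *skel_init(struct nvm_dev *dev, struct gendisk *tdisk,
			       int lun_begin, int lun_end)
	{
		/* claim luns, e.g. via dev->bm->get_luns(), set up state */
		return NULL;	/* a real target returns its private data */
	}

	static void skel_exit(void *private)
	{
		/* release what init() set up */
	}

	static struct nvm_tgt_type tt_skel = {
		.name		= "skel",
		.version	= {1, 0, 0},
		.make_rq	= skel_make_rq,
		.capacity	= skel_capacity,
		.end_io		= skel_end_io,
		.init		= skel_init,
		.exit		= skel_exit,
	};

	static int __init skel_module_init(void)
	{
		return nvm_register_target(&tt_skel);
	}

nvm_create_target() in the diff below instantiates such a type by name over a
lun range, calling init() and installing make_rq() as the queue's
make_request function.)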
> Contributions in this patch from:
>
>    Javier Gonzalez <jg@lightnvm.io>
>    Jesper Madsen <jmad@itu.dk>
>
> Signed-off-by: Matias Bjørling <mb@lightnvm.io>
> ---
>   MAINTAINERS               |   8 +
>   drivers/Kconfig           |   2 +
>   drivers/Makefile          |   5 +
>   drivers/lightnvm/Kconfig  |  16 ++
>   drivers/lightnvm/Makefile |   5 +
>   drivers/lightnvm/core.c   | 590 ++++++++++++++++++++++++++++++++++++++++++++++
>   include/linux/lightnvm.h  | 335 ++++++++++++++++++++++++++
>   7 files changed, 961 insertions(+)
>   create mode 100644 drivers/lightnvm/Kconfig
>   create mode 100644 drivers/lightnvm/Makefile
>   create mode 100644 drivers/lightnvm/core.c
>   create mode 100644 include/linux/lightnvm.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 2d3d55c..d149104 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -6162,6 +6162,14 @@ S:	Supported
>   F:	drivers/nvdimm/pmem.c
>   F:	include/linux/pmem.h
>
> +LIGHTNVM PLATFORM SUPPORT
> +M:	Matias Bjorling <mb@lightnvm.io>
> +W:	http://github.com/OpenChannelSSD
> +S:	Maintained
> +F:	drivers/lightnvm/
> +F:	include/linux/lightnvm.h
> +F:	include/uapi/linux/lightnvm.h
> +
>   LINUX FOR IBM pSERIES (RS/6000)
>   M:	Paul Mackerras <paulus@au.ibm.com>
>   W:	http://www.ibm.com/linux/ltc/projects/ppc
> diff --git a/drivers/Kconfig b/drivers/Kconfig
> index 6e973b8..3992902 100644
> --- a/drivers/Kconfig
> +++ b/drivers/Kconfig
> @@ -42,6 +42,8 @@ source "drivers/net/Kconfig"
>
>   source "drivers/isdn/Kconfig"
>
> +source "drivers/lightnvm/Kconfig"
> +
>   # input before char - char/joystick depends on it. As does USB.
>
>   source "drivers/input/Kconfig"
> diff --git a/drivers/Makefile b/drivers/Makefile
> index b64b49f..75978ab 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -63,6 +63,10 @@ obj-$(CONFIG_FB_I810)           += video/fbdev/i810/
>   obj-$(CONFIG_FB_INTEL)          += video/fbdev/intelfb/
>
>   obj-$(CONFIG_PARPORT)		+= parport/
> +
> +# lightnvm/ comes before block to initialize bm before usage
> +obj-$(CONFIG_NVM)		+= lightnvm/
> +
>   obj-y				+= base/ block/ misc/ mfd/ nfc/
>   obj-$(CONFIG_LIBNVDIMM)		+= nvdimm/
>   obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
> @@ -165,3 +169,4 @@ obj-$(CONFIG_RAS)		+= ras/
>   obj-$(CONFIG_THUNDERBOLT)	+= thunderbolt/
>   obj-$(CONFIG_CORESIGHT)		+= hwtracing/coresight/
>   obj-$(CONFIG_ANDROID)		+= android/
> +
> diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
> new file mode 100644
> index 0000000..1f8412c
> --- /dev/null
> +++ b/drivers/lightnvm/Kconfig
> @@ -0,0 +1,16 @@
> +#
> +# Open-Channel SSD NVM configuration
> +#
> +
> +menuconfig NVM
> +	bool "Open-Channel SSD target support"
> +	depends on BLOCK
> +	help
> +	  Say Y here to enable Open-channel SSDs.
> +
> +	  Open-Channel SSDs implement a set of extensions to SSDs that
> +	  expose direct access to the underlying non-volatile memory.
> +
> +	  If you say N, all options in this submenu will be skipped and disabled;
> +	  only do this if you know what you are doing.
> +
> diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
> new file mode 100644
> index 0000000..38185e9
> --- /dev/null
> +++ b/drivers/lightnvm/Makefile
> @@ -0,0 +1,5 @@
> +#
> +# Makefile for Open-Channel SSDs.
> +#
> +
> +obj-$(CONFIG_NVM)		:= core.o
> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
> new file mode 100644
> index 0000000..6499922
> --- /dev/null
> +++ b/drivers/lightnvm/core.c
> @@ -0,0 +1,590 @@
> +/*
> + * Copyright (C) 2015 IT University of Copenhagen
> + * Initial release: Matias Bjorling <mabj@itu.dk>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version
> + * 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; see the file COPYING.  If not, write to
> + * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
> + * USA.
> + *
> + */
> +
> +#include <linux/blkdev.h>
> +#include <linux/blk-mq.h>
> +#include <linux/list.h>
> +#include <linux/types.h>
> +#include <linux/sem.h>
> +#include <linux/bitmap.h>
> +#include <linux/module.h>
> +
> +#include <linux/lightnvm.h>
> +
> +static LIST_HEAD(nvm_targets);
> +static LIST_HEAD(nvm_bms);
> +static LIST_HEAD(nvm_devices);
> +static DECLARE_RWSEM(nvm_lock);
> +
> +struct nvm_tgt_type *nvm_find_target_type(const char *name)
> +{
> +	struct nvm_tgt_type *tt;
> +
> +	list_for_each_entry(tt, &nvm_targets, list)
> +		if (!strcmp(name, tt->name))
> +			return tt;
> +
> +	return NULL;
> +}
> +
> +int nvm_register_target(struct nvm_tgt_type *tt)
> +{
> +	int ret = 0;
> +
> +	down_write(&nvm_lock);
> +	if (nvm_find_target_type(tt->name))
> +		ret = -EEXIST;
> +	else
> +		list_add(&tt->list, &nvm_targets);
> +	up_write(&nvm_lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(nvm_register_target);
> +
> +void nvm_unregister_target(struct nvm_tgt_type *tt)
> +{
> +	if (!tt)
> +		return;
> +
> +	down_write(&nvm_lock);
> +	list_del(&tt->list);
> +	up_write(&nvm_lock);
> +}
> +EXPORT_SYMBOL(nvm_unregister_target);
> +
> +void *nvm_alloc_ppalist(struct nvm_dev *dev, gfp_t mem_flags,
> +							dma_addr_t *dma_handler)
> +{
> +	return dev->ops->alloc_ppalist(dev->q, dev->ppalist_pool, mem_flags,
> +								dma_handler);
> +}
> +EXPORT_SYMBOL(nvm_alloc_ppalist);
> +
> +void nvm_free_ppalist(struct nvm_dev *dev, void *ppa_list,
> +							dma_addr_t dma_handler)
> +{
> +	dev->ops->free_ppalist(dev->ppalist_pool, ppa_list, dma_handler);
> +}
> +EXPORT_SYMBOL(nvm_free_ppalist);
> +
> +struct nvm_bm_type *nvm_find_bm_type(const char *name)
> +{
> +	struct nvm_bm_type *bt;
> +
> +	list_for_each_entry(bt, &nvm_bms, list)
> +		if (!strcmp(name, bt->name))
> +			return bt;
> +
> +	return NULL;
> +}
> +
> +int nvm_register_bm(struct nvm_bm_type *bt)
> +{
> +	int ret = 0;
> +
> +	down_write(&nvm_lock);
> +	if (nvm_find_bm_type(bt->name))
> +		ret = -EEXIST;
> +	else
> +		list_add(&bt->list, &nvm_bms);
> +	up_write(&nvm_lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(nvm_register_bm);
> +
> +void nvm_unregister_bm(struct nvm_bm_type *bt)
> +{
> +	if (!bt)
> +		return;
> +
> +	down_write(&nvm_lock);
> +	list_del(&bt->list);
> +	up_write(&nvm_lock);
> +}
> +EXPORT_SYMBOL(nvm_unregister_bm);
> +
> +struct nvm_dev *nvm_find_nvm_dev(const char *name)
> +{
> +	struct nvm_dev *dev;
> +
> +	list_for_each_entry(dev, &nvm_devices, devices)
> +		if (!strcmp(name, dev->name))
> +			return dev;
> +
> +	return NULL;
> +}
> +
> +struct nvm_block *nvm_get_blk(struct nvm_dev *dev, struct nvm_lun *lun,
> +							unsigned long flags)
> +{
> +	return dev->bm->get_blk(dev, lun, flags);
> +}
> +EXPORT_SYMBOL(nvm_get_blk);
> +
> +/* Assumes that all valid pages have already been moved to the bm on release */
> +void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
> +{
> +	return dev->bm->put_blk(dev, blk);
> +}
> +EXPORT_SYMBOL(nvm_put_blk);
> +
> +int nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
> +{
> +	return dev->ops->submit_io(dev->q, rqd);
> +}
> +EXPORT_SYMBOL(nvm_submit_io);
> +
> +/* Send erase command to device */
> +int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
> +{
> +	return dev->bm->erase_blk(dev, blk);
> +}
> +EXPORT_SYMBOL(nvm_erase_blk);
> +
> +static void nvm_core_free(struct nvm_dev *dev)
> +{
> +	kfree(dev->identity.chnls);
> +	kfree(dev);
> +}
> +
> +static int nvm_core_init(struct nvm_dev *dev)
> +{
> +	dev->nr_luns = dev->identity.nchannels;
> +	dev->sector_size = EXPOSED_PAGE_SIZE;
> +	INIT_LIST_HEAD(&dev->online_targets);
> +
> +	return 0;
> +}
> +
> +static void nvm_free(struct nvm_dev *dev)
> +{
> +	if (!dev)
> +		return;
> +
> +	if (dev->bm)
> +		dev->bm->unregister_bm(dev);
> +
> +	nvm_core_free(dev);
> +}
> +
> +int nvm_validate_features(struct nvm_dev *dev)
> +{
> +	struct nvm_get_features gf;
> +	int ret;
> +
> +	ret = dev->ops->get_features(dev->q, &gf);
> +	if (ret)
> +		return ret;
> +
> +	dev->features = gf;
> +
> +	return 0;
> +}
> +
> +int nvm_validate_responsibility(struct nvm_dev *dev)
> +{
> +	if (!dev->ops->set_responsibility)
> +		return 0;
> +
> +	return dev->ops->set_responsibility(dev->q, 0);
> +}
> +
> +int nvm_init(struct nvm_dev *dev)
> +{
> +	struct nvm_bm_type *bt;
> +	int ret = 0;
> +
> +	if (!dev->q || !dev->ops)
> +		return -EINVAL;
> +
> +	if (dev->ops->identify(dev->q, &dev->identity)) {
> +		pr_err("nvm: device could not be identified\n");
> +		ret = -EINVAL;
> +		goto err;
> +	}
> +
> +	pr_debug("nvm dev: ver %u type %u chnls %u\n",
> +			dev->identity.ver_id,
> +			dev->identity.nvm_type,
> +			dev->identity.nchannels);
> +
> +	ret = nvm_validate_features(dev);
> +	if (ret) {
> +		pr_err("nvm: disk features are not supported.");
> +		goto err;
> +	}
> +
> +	ret = nvm_validate_responsibility(dev);
> +	if (ret) {
> +		pr_err("nvm: disk responsibilities are not supported.");
> +		goto err;
> +	}
> +
> +	ret = nvm_core_init(dev);
> +	if (ret) {
> +		pr_err("nvm: could not initialize core structures.\n");
> +		goto err;
> +	}
> +
> +	if (!dev->nr_luns) {
> +		pr_err("nvm: device did not expose any luns.\n");
> +		ret = -EINVAL;
> +		goto err;
> +	}
> +
> +	/* register the device with a supported BM */
> +	list_for_each_entry(bt, &nvm_bms, list) {
> +		ret = bt->register_bm(dev);
> +		if (ret < 0)
> +			goto err; /* initialization failed */
> +		if (ret > 0) {
> +			dev->bm = bt;
> +			break; /* successfully initialized */
> +		}
> +	}

Why just search the list from head to tail? Could the user specify it
in nvm_create_target()?
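
Something like the following sketch (hypothetical, reusing the existing
nvm_find_bm_type() lookup) would let the configuration pin a bm by name
instead of taking the first one that happens to register successfully:

	static int nvm_attach_bm(struct nvm_dev *dev, const char *bmname)
	{
		struct nvm_bm_type *bt;
		int ret;

		bt = nvm_find_bm_type(bmname);
		if (!bt)
			return -EINVAL;

		ret = bt->register_bm(dev);
		if (ret < 0)
			return ret;	/* initialization failed */
		if (!ret)
			return -ENODEV;	/* bm not compatible */

		dev->bm = bt;
		return 0;
	}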
> +
> +	if (!ret) {
> +		pr_info("nvm: no compatible bm was found.\n");
> +		return 0;
> +	}

If we allow an nvm_device to be registered with no bm, we will get
a NULL pointer dereference later on.

As mentioned above, why do we have to choose a bm for the device in nvm_register()?

Thanx
Yang
> +
> +	pr_info("nvm: registered %s with luns: %u blocks: %lu sector size: %d\n",
> +		dev->name, dev->nr_luns, dev->total_blocks, dev->sector_size);
> +
> +	return 0;
> +err:
> +	nvm_free(dev);
> +	pr_err("nvm: failed to initialize nvm\n");
> +	return ret;
> +}
> +
> +void nvm_exit(struct nvm_dev *dev)
> +{
> +	if (dev->ppalist_pool)
> +		dev->ops->destroy_ppa_pool(dev->ppalist_pool);
> +	nvm_free(dev);
> +
> +	pr_info("nvm: successfully unloaded\n");
> +}
> +
> +static const struct block_device_operations nvm_fops = {
> +	.owner		= THIS_MODULE,
> +};
> +
> +static int nvm_create_target(struct nvm_dev *dev, char *ttname, char *tname,
> +						int lun_begin, int lun_end)
> +{
> +	struct request_queue *tqueue;
> +	struct gendisk *tdisk;
> +	struct nvm_tgt_type *tt;
> +	struct nvm_target *t;
> +	void *targetdata;
> +
> +	tt = nvm_find_target_type(ttname);
> +	if (!tt) {
> +		pr_err("nvm: target type %s not found\n", ttname);
> +		return -EINVAL;
> +	}
> +
> +	down_write(&nvm_lock);
> +	list_for_each_entry(t, &dev->online_targets, list) {
> +		if (!strcmp(tname, t->disk->disk_name)) {
> +			pr_err("nvm: target name already exists.\n");
> +			up_write(&nvm_lock);
> +			return -EINVAL;
> +		}
> +	}
> +	up_write(&nvm_lock);
> +
> +	t = kmalloc(sizeof(struct nvm_target), GFP_KERNEL);
> +	if (!t)
> +		return -ENOMEM;
> +
> +	tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node);
> +	if (!tqueue)
> +		goto err_t;
> +	blk_queue_make_request(tqueue, tt->make_rq);
> +
> +	tdisk = alloc_disk(0);
> +	if (!tdisk)
> +		goto err_queue;
> +
> +	sprintf(tdisk->disk_name, "%s", tname);
> +	tdisk->flags = GENHD_FL_EXT_DEVT;
> +	tdisk->major = 0;
> +	tdisk->first_minor = 0;
> +	tdisk->fops = &nvm_fops;
> +	tdisk->queue = tqueue;
> +
> +	targetdata = tt->init(dev, tdisk, lun_begin, lun_end);
> +	if (IS_ERR(targetdata))
> +		goto err_init;
> +
> +	tdisk->private_data = targetdata;
> +	tqueue->queuedata = targetdata;
> +
> +	blk_queue_max_hw_sectors(tqueue, 8 * dev->ops->max_phys_sect);
> +
> +	set_capacity(tdisk, tt->capacity(targetdata));
> +	add_disk(tdisk);
> +
> +	t->type = tt;
> +	t->disk = tdisk;
> +
> +	down_write(&nvm_lock);
> +	list_add_tail(&t->list, &dev->online_targets);
> +	up_write(&nvm_lock);
> +
> +	return 0;
> +err_init:
> +	put_disk(tdisk);
> +err_queue:
> +	blk_cleanup_queue(tqueue);
> +err_t:
> +	kfree(t);
> +	return -ENOMEM;
> +}
> +
> +static void nvm_remove_target(struct nvm_target *t)
> +{
> +	struct nvm_tgt_type *tt = t->type;
> +	struct gendisk *tdisk = t->disk;
> +	struct request_queue *q = tdisk->queue;
> +
> +	lockdep_assert_held(&nvm_lock);
> +
> +	del_gendisk(tdisk);
> +	if (tt->exit)
> +		tt->exit(tdisk->private_data);
> +
> +	blk_cleanup_queue(q);
> +
> +	put_disk(tdisk);
> +
> +	list_del(&t->list);
> +	kfree(t);
> +}
> +
> +static int nvm_configure_show(const char *val)
> +{
> +	struct nvm_dev *dev;
> +	char opcode, devname[DISK_NAME_LEN];
> +	int ret;
> +
> +	ret = sscanf(val, "%c %s", &opcode, devname);
> +	if (ret != 2) {
> +		pr_err("nvm: invalid command. Use \"opcode devicename\".\n");
> +		return -EINVAL;
> +	}
> +
> +	dev = nvm_find_nvm_dev(devname);
> +	if (!dev) {
> +		pr_err("nvm: device not found\n");
> +		return -EINVAL;
> +	}
> +
> +	if (!dev->bm)
> +		return 0;
> +
> +	dev->bm->free_blocks_print(dev);
> +
> +	return 0;
> +}
> +
> +static int nvm_configure_del(const char *val)
> +{
> +	struct nvm_target *t = NULL;
> +	struct nvm_dev *dev;
> +	char opcode, tname[255];
> +	int ret;
> +
> +	ret = sscanf(val, "%c %s", &opcode, tname);
> +	if (ret != 2) {
> +		pr_err("nvm: invalid command. Use \"d targetname\".\n");
> +		return -EINVAL;
> +	}
> +
> +	down_write(&nvm_lock);
> +	list_for_each_entry(dev, &nvm_devices, devices)
> +		list_for_each_entry(t, &dev->online_targets, list) {
> +			if (!strcmp(tname, t->disk->disk_name)) {
> +				nvm_remove_target(t);
> +				ret = 0;
> +				break;
> +			}
> +		}
> +	up_write(&nvm_lock);
> +
> +	if (ret) {
> +		pr_err("nvm: target \"%s\" doesn't exist.\n", tname);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int nvm_configure_add(const char *val)
> +{
> +	struct nvm_dev *dev;
> +	char opcode, devname[DISK_NAME_LEN], tgtengine[255], tname[255];
> +	int lun_begin, lun_end, ret;
> +
> +	ret = sscanf(val, "%c %s %s %s %u:%u", &opcode, devname, tname,
> +						tgtengine, &lun_begin, &lun_end);
> +	if (ret != 6) {
> +		pr_err("nvm: invalid command. Use \"opcode device name tgtengine lun_begin:lun_end\".\n");
> +		return -EINVAL;
> +	}
> +
> +	dev = nvm_find_nvm_dev(devname);
> +	if (!dev) {
> +		pr_err("nvm: device not found\n");
> +		return -EINVAL;
> +	}
> +
> +	if (lun_begin > lun_end || lun_end > dev->nr_luns) {
> +		pr_err("nvm: lun out of bound (%u:%u > %u)\n",
> +					lun_begin, lun_end, dev->nr_luns);
> +		return -EINVAL;
> +	}
> +
> +	return nvm_create_target(dev, tgtengine, tname, lun_begin, lun_end);
> +}
> +
> +/* Exposes administrative interface through /sys/module/lnvm/parameters/configure_debug */
> +static int nvm_configure_by_str_event(const char *val,
> +					const struct kernel_param *kp)
> +{
> +	char opcode;
> +	int ret;
> +
> +	ret = sscanf(val, "%c", &opcode);
> +	if (ret != 1) {
> +		pr_err("nvm: configure must be in the format of \"opcode ...\"\n");
> +		return -EINVAL;
> +	}
> +
> +	switch (opcode) {
> +	case 'a':
> +		return nvm_configure_add(val);
> +	case 'd':
> +		return nvm_configure_del(val);
> +	case 's':
> +		return nvm_configure_show(val);
> +	default:
> +		pr_err("nvm: invalid opcode.\n");
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int nvm_configure_get(char *buf, const struct kernel_param *kp)
> +{
> +	char *buf_start = buf;
> +	struct nvm_dev *dev;
> +
> +	buf += sprintf(buf, "available devices:\n");
> +	down_write(&nvm_lock);
> +	list_for_each_entry(dev, &nvm_devices, devices) {
> +		if (buf - buf_start > 4095 - DISK_NAME_LEN)
> +			break;
> +		buf += sprintf(buf, " %s\n", dev->name);
> +	}
> +	up_write(&nvm_lock);
> +
> +	return buf - buf_start - 1;
> +}
> +
> +static const struct kernel_param_ops nvm_configure_by_str_event_param_ops = {
> +	.set	= nvm_configure_by_str_event,
> +	.get	= nvm_configure_get,
> +};
> +
> +#undef MODULE_PARAM_PREFIX
> +#define MODULE_PARAM_PREFIX	"lnvm."
> +
> +module_param_cb(configure_debug, &nvm_configure_by_str_event_param_ops, NULL,
> +									0644);
> +
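Given the "lnvm." parameter prefix, this ends up in sysfs as
/sys/module/lnvm/parameters/configure_debug. As a usage illustration with
invented names: writing "a nvme0n1 mytarget rrpc 0:3" to that file requests
an rrpc-backed target named mytarget over luns 0 to 3 of device nvme0n1,
"d mytarget" tears it down again, and "s nvme0n1" prints the device's
free-block statistics.
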
> +int nvm_register(struct request_queue *q, char *disk_name,
> +							struct nvm_dev_ops *ops)
> +{
> +	struct nvm_dev *dev;
> +	int ret;
> +
> +	if (!ops->identify || !ops->get_features)
> +		return -EINVAL;
> +
> +	dev = kzalloc(sizeof(struct nvm_dev), GFP_KERNEL);
> +	if (!dev)
> +		return -ENOMEM;
> +
> +	dev->q = q;
> +	dev->ops = ops;
> +	strncpy(dev->name, disk_name, DISK_NAME_LEN);
> +
> +	ret = nvm_init(dev);
> +	if (ret)
> +		goto err_init;
> +
> +	down_write(&nvm_lock);
> +	list_add(&dev->devices, &nvm_devices);
> +	up_write(&nvm_lock);
> +
> +	if (dev->ops->max_phys_sect > 256) {
> +		pr_info("nvm: maximum number of sectors supported in target is 255. max_phys_sect set to 255\n");
> +		dev->ops->max_phys_sect = 255;
> +	}
> +
> +	if (dev->ops->max_phys_sect > 1) {
> +		dev->ppalist_pool = dev->ops->create_ppa_pool(dev->q);
> +		if (!dev->ppalist_pool) {
> +			pr_err("nvm: could not create ppa pool\n");
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	return 0;
> +err_init:
> +	kfree(dev);
> +	return ret;
> +}
> +EXPORT_SYMBOL(nvm_register);
> +
> +void nvm_unregister(char *disk_name)
> +{
> +	struct nvm_dev *dev = nvm_find_nvm_dev(disk_name);
> +
> +	if (!dev) {
> +		pr_err("nvm: could not find device %s on unregister\n",
> +								disk_name);
> +		return;
> +	}
> +
> +	nvm_exit(dev);
> +
> +	down_write(&nvm_lock);
> +	list_del(&dev->devices);
> +	up_write(&nvm_lock);
> +}
> +EXPORT_SYMBOL(nvm_unregister);
> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
> new file mode 100644
> index 0000000..9654354
> --- /dev/null
> +++ b/include/linux/lightnvm.h
> @@ -0,0 +1,335 @@
> +#ifndef NVM_H
> +#define NVM_H
> +
> +enum {
> +	NVM_IO_OK = 0,
> +	NVM_IO_REQUEUE = 1,
> +	NVM_IO_DONE = 2,
> +	NVM_IO_ERR = 3,
> +
> +	NVM_IOTYPE_NONE = 0,
> +	NVM_IOTYPE_GC = 1,
> +};
> +
> +#ifdef CONFIG_NVM
> +
> +#include <linux/blkdev.h>
> +#include <linux/types.h>
> +#include <linux/file.h>
> +#include <linux/dmapool.h>
> +
> +enum {
> +	/* HW Responsibilities */
> +	NVM_RSP_L2P	= 1 << 0,
> +	NVM_RSP_GC	= 1 << 1,
> +	NVM_RSP_ECC	= 1 << 2,
> +
> +	/* Physical NVM Type */
> +	NVM_NVMT_BLK	= 0,
> +	NVM_NVMT_BYTE	= 1,
> +
> +	/* Internal IO Scheduling algorithm */
> +	NVM_IOSCHED_CHANNEL	= 0,
> +	NVM_IOSCHED_CHIP	= 1,
> +
> +	/* Status codes */
> +	NVM_SUCCESS		= 0,
> +	NVM_RSP_NOT_CHANGEABLE	= 1,
> +};
> +
> +struct nvm_id_chnl {
> +	u64	laddr_begin;
> +	u64	laddr_end;
> +	u32	oob_size;
> +	u32	queue_size;
> +	u32	gran_read;
> +	u32	gran_write;
> +	u32	gran_erase;
> +	u32	t_r;
> +	u32	t_sqr;
> +	u32	t_w;
> +	u32	t_sqw;
> +	u32	t_e;
> +	u16	chnl_parallelism;
> +	u8	io_sched;
> +	u8	res[133];
> +};
> +
> +struct nvm_id {
> +	u8	ver_id;
> +	u8	nvm_type;
> +	u16	nchannels;
> +	struct nvm_id_chnl *chnls;
> +};
> +
> +struct nvm_get_features {
> +	u64	rsp;
> +	u64	ext;
> +};
> +
> +struct nvm_target {
> +	struct list_head list;
> +	struct nvm_tgt_type *type;
> +	struct gendisk *disk;
> +};
> +
> +struct nvm_tgt_instance {
> +	struct nvm_tgt_type *tt;
> +};
> +
> +struct nvm_rq {
> +	struct nvm_tgt_instance *ins;
> +	struct bio *bio;
> +	union {
> +		sector_t ppa;
> +		sector_t *ppa_list;
> +	};
> +	/*DMA handler to be used by underlying devices supporting DMA*/
> +	dma_addr_t dma_ppa_list;
> +	uint8_t npages;
> +};
> +
> +static inline struct nvm_rq *nvm_rq_from_pdu(void *pdu)
> +{
> +	return pdu - sizeof(struct nvm_rq);
> +}
> +
> +static inline void *nvm_rq_to_pdu(struct nvm_rq *rqdata)
> +{
> +	return rqdata + 1;
> +}
> +
> +struct nvm_block;
> +
> +typedef int (nvm_l2p_update_fn)(u64, u64, u64 *, void *);
> +typedef int (nvm_bb_update_fn)(u32, void *, unsigned int, void *);
> +typedef int (nvm_id_fn)(struct request_queue *, struct nvm_id *);
> +typedef int (nvm_get_features_fn)(struct request_queue *,
> +						struct nvm_get_features *);
> +typedef int (nvm_set_rsp_fn)(struct request_queue *, u64);
> +typedef int (nvm_get_l2p_tbl_fn)(struct request_queue *, u64, u64,
> +				nvm_l2p_update_fn *, void *);
> +typedef int (nvm_op_bb_tbl_fn)(struct request_queue *, int, unsigned int,
> +				nvm_bb_update_fn *, void *);
> +typedef int (nvm_submit_io_fn)(struct request_queue *, struct nvm_rq *);
> +typedef int (nvm_erase_blk_fn)(struct request_queue *, sector_t);
> +typedef void *(nvm_create_ppapool_fn)(struct request_queue *);
> +typedef void (nvm_destroy_ppapool_fn)(void *);
> +typedef void *(nvm_alloc_ppalist_fn)(struct request_queue *, void *, gfp_t,
> +								dma_addr_t*);
> +typedef void (nvm_free_ppalist_fn)(void *, void*, dma_addr_t);
> +
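For reference, the l2p path works in chunks: get_l2p_tbl() invokes the
supplied nvm_l2p_update_fn once per transferred range, and a non-zero return
from the callback aborts the walk. A hypothetical target-side callback
(names invented) that mirrors the table into host memory:

	struct skel_map {
		u64 nr_entries;
		u64 *l2p;	/* one physical address per logical address */
	};

	static int skel_l2p_update(u64 slba, u64 nlb, u64 *entries, void *priv)
	{
		struct skel_map *map = priv;
		u64 i;

		/* entries arrive little-endian from the device */
		for (i = 0; i < nlb; i++)
			map->l2p[slba + i] = le64_to_cpu(entries[i]);

		return 0;
	}

	/* pull the whole table in one call: */
	ret = dev->ops->get_l2p_tbl(dev->q, 0, map->nr_entries,
					skel_l2p_update, map);
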
> +struct nvm_dev_ops {
> +	nvm_id_fn		*identify;
> +	nvm_get_features_fn	*get_features;
> +	nvm_set_rsp_fn		*set_responsibility;
> +	nvm_get_l2p_tbl_fn	*get_l2p_tbl;
> +	nvm_op_bb_tbl_fn	*set_bb_tbl;
> +	nvm_op_bb_tbl_fn	*get_bb_tbl;
> +
> +	nvm_submit_io_fn	*submit_io;
> +	nvm_erase_blk_fn	*erase_block;
> +
> +	nvm_create_ppapool_fn	*create_ppa_pool;
> +	nvm_destroy_ppapool_fn	*destroy_ppa_pool;
> +	nvm_alloc_ppalist_fn	*alloc_ppalist;
> +	nvm_free_ppalist_fn	*free_ppalist;
> +
> +	uint8_t			max_phys_sect;
> +};
> +
> +struct nvm_lun {
> +	int id;
> +
> +	int nr_pages_per_blk;
> +	unsigned int nr_blocks;		/* end_block - start_block. */
> +	unsigned int nr_free_blocks;	/* Number of unused blocks */
> +
> +	struct nvm_block *blocks;
> +
> +	spinlock_t lock;
> +};
> +
> +struct nvm_block {
> +	struct list_head list;
> +	struct nvm_lun *lun;
> +	unsigned long long id;
> +
> +	void *priv;
> +	int type;
> +};
> +
> +struct nvm_dev {
> +	struct nvm_dev_ops *ops;
> +
> +	struct list_head devices;
> +	struct list_head online_targets;
> +
> +	/* Block manager */
> +	struct nvm_bm_type *bm;
> +	void *bmp;
> +
> +	/* Target information */
> +	int nr_luns;
> +
> +	/* Calculated/Cached values. These do not reflect the actual usable
> +	 * blocks at run-time. */
> +	unsigned long total_pages;
> +	unsigned long total_blocks;
> +	unsigned max_pages_per_blk;
> +
> +	uint32_t sector_size;
> +
> +	void *ppalist_pool;
> +
> +	/* Identity */
> +	struct nvm_id identity;
> +	struct nvm_get_features features;
> +
> +	/* Backend device */
> +	struct request_queue *q;
> +	char name[DISK_NAME_LEN];
> +};
> +
> +typedef void (nvm_tgt_make_rq_fn)(struct request_queue *, struct bio *);
> +typedef sector_t (nvm_tgt_capacity_fn)(void *);
> +typedef void (nvm_tgt_end_io_fn)(struct nvm_rq *, int);
> +typedef void *(nvm_tgt_init_fn)(struct nvm_dev *, struct gendisk *, int, int);
> +typedef void (nvm_tgt_exit_fn)(void *);
> +
> +struct nvm_tgt_type {
> +	const char *name;
> +	unsigned int version[3];
> +
> +	/* target entry points */
> +	nvm_tgt_make_rq_fn *make_rq;
> +	nvm_tgt_capacity_fn *capacity;
> +	nvm_tgt_end_io_fn *end_io;
> +
> +	/* module-specific init/teardown */
> +	nvm_tgt_init_fn *init;
> +	nvm_tgt_exit_fn *exit;
> +
> +	/* For internal use */
> +	struct list_head list;
> +};
> +
> +extern int nvm_register_target(struct nvm_tgt_type *);
> +extern void nvm_unregister_target(struct nvm_tgt_type *);
> +
> +extern void *nvm_alloc_ppalist(struct nvm_dev *, gfp_t, dma_addr_t *);
> +extern void nvm_free_ppalist(struct nvm_dev *, void *, dma_addr_t);
> +
> +typedef int (nvm_bm_register_fn)(struct nvm_dev *);
> +typedef void (nvm_bm_unregister_fn)(struct nvm_dev *);
> +typedef struct nvm_block *(nvm_bm_get_blk_fn)(struct nvm_dev *,
> +					      struct nvm_lun *, unsigned long);
> +typedef void (nvm_bm_put_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef int (nvm_bm_open_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef int (nvm_bm_close_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef void (nvm_bm_flush_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef int (nvm_bm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
> +typedef void (nvm_bm_end_io_fn)(struct nvm_rq *, int);
> +typedef int (nvm_bm_erase_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef int (nvm_bm_register_prog_err_fn)(struct nvm_dev *,
> +	     void (prog_err_fn)(struct nvm_dev *, struct nvm_block *));
> +typedef int (nvm_bm_save_state_fn)(struct file *);
> +typedef int (nvm_bm_restore_state_fn)(struct file *);
> +typedef struct nvm_lun *(nvm_bm_get_luns_fn)(struct nvm_dev *, int, int);
> +typedef void (nvm_bm_free_blocks_print_fn)(struct nvm_dev *);
> +
> +struct nvm_bm_type {
> +	const char *name;
> +	unsigned int version[3];
> +
> +	nvm_bm_register_fn *register_bm;
> +	nvm_bm_unregister_fn *unregister_bm;
> +
> +	/* Block administration callbacks */
> +	nvm_bm_get_blk_fn *get_blk;
> +	nvm_bm_put_blk_fn *put_blk;
> +	nvm_bm_open_blk_fn *open_blk;
> +	nvm_bm_close_blk_fn *close_blk;
> +	nvm_bm_flush_blk_fn *flush_blk;
> +
> +	nvm_bm_submit_io_fn *submit_io;
> +	nvm_bm_end_io_fn *end_io;
> +	nvm_bm_erase_blk_fn *erase_blk;
> +
> +	/* State management for debugging purposes */
> +	nvm_bm_save_state_fn *save_state;
> +	nvm_bm_restore_state_fn *restore_state;
> +
> +	/* Configuration management */
> +	nvm_bm_get_luns_fn *get_luns;
> +
> +	/* Statistics */
> +	nvm_bm_free_blocks_print_fn *free_blocks_print;
> +	struct list_head list;
> +};
> +
> +extern int nvm_register_bm(struct nvm_bm_type *);
> +extern void nvm_unregister_bm(struct nvm_bm_type *);
> +
> +extern struct nvm_block *nvm_get_blk(struct nvm_dev *, struct nvm_lun *,
> +								unsigned long);
> +extern void nvm_put_blk(struct nvm_dev *, struct nvm_block *);
> +extern int nvm_erase_blk(struct nvm_dev *, struct nvm_block *);
> +
> +extern int nvm_register(struct request_queue *, char *,
> +						struct nvm_dev_ops *);
> +extern void nvm_unregister(char *);
> +
> +extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
> +
> +/* We currently assume that the lightnvm device accepts data in 512 byte
> + * chunks. This should be set to the smallest command size available for a
> + * given device.
> + */
> +#define NVM_SECTOR (512)
> +#define EXPOSED_PAGE_SIZE (4096)
> +
> +#define NR_PHY_IN_LOG (EXPOSED_PAGE_SIZE / NVM_SECTOR)
> +
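(With these defaults, each exposed 4096-byte page therefore spans
NR_PHY_IN_LOG = 4096 / 512 = 8 physical 512-byte sectors.)
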
> +#define NVM_MSG_PREFIX "nvm"
> +#define ADDR_EMPTY (~0ULL)
> +
> +static inline unsigned long nvm_get_rq_flags(struct request *rq)
> +{
> +	return (unsigned long)rq->cmd;
> +}
> +
> +#else /* CONFIG_NVM */
> +
> +struct nvm_dev_ops;
> +struct nvm_dev;
> +struct nvm_lun;
> +struct nvm_block;
> +struct nvm_rq {
> +};
> +struct nvm_tgt_type;
> +struct nvm_tgt_instance;
> +
> +static inline struct nvm_tgt_type *nvm_find_target_type(const char *c)
> +{
> +	return NULL;
> +}
> +static inline int nvm_register(struct request_queue *q, char *disk_name,
> +							struct nvm_dev_ops *ops)
> +{
> +	return -EINVAL;
> +}
> +static inline void nvm_unregister(char *disk_name) {}
> +static inline struct nvm_block *nvm_get_blk(struct nvm_dev *dev,
> +				struct nvm_lun *lun, unsigned long flags)
> +{
> +	return NULL;
> +}
> +static inline void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk) {}
> +static inline int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
> +{
> +	return -EINVAL;
> +}
> +
> +#endif /* CONFIG_NVM */
> +#endif /* NVM_H */
>


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v7 1/5] lightnvm: Support for Open-Channel SSDs
@ 2015-09-02  3:50     ` Dongsheng Yang
  0 siblings, 0 replies; 33+ messages in thread
From: Dongsheng Yang @ 2015-09-02  3:50 UTC (permalink / raw)
  To: Matias Bjørling, hch, axboe, linux-fsdevel, linux-kernel,
	linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

On 08/07/2015 10:29 PM, Matias Bjørling wrote:
> Open-channel SSDs are devices that share responsibilities with the host
> in order to implement and maintain features that typical SSDs keep
> strictly in firmware. These include (i) the Flash Translation Layer
> (FTL), (ii) bad block management, and (iii) hardware units such as the
> flash controller, the interface controller, and large amounts of flash
> chips. In this way, Open-channels SSDs exposes direct access to their
> physical flash storage, while keeping a subset of the internal features
> of SSDs.
>
> LightNVM is a specification that gives support to Open-channel SSDs
> LightNVM allows the host to manage data placement, garbage collection,
> and parallelism. Device specific responsibilities such as bad block
> management, FTL extensions to support atomic IOs, or metadata
> persistence are still handled by the device.
>
> The implementation of LightNVM consists of two parts: core and
> (multiple) targets. The core implements functionality shared across
> targets. This is initialization, teardown and statistics. The targets
> implement the interface that exposes physical flash to user-space
> applications. Examples of such targets include key-value store,
> object-store, as well as traditional block devices, which can be
> application-specific.
>
> Contributions in this patch from:
>
>    Javier Gonzalez <jg@lightnvm.io>
>    Jesper Madsen <jmad@itu.dk>
>
> Signed-off-by: Matias Bjørling <mb@lightnvm.io>
> ---
>   MAINTAINERS               |   8 +
>   drivers/Kconfig           |   2 +
>   drivers/Makefile          |   5 +
>   drivers/lightnvm/Kconfig  |  16 ++
>   drivers/lightnvm/Makefile |   5 +
>   drivers/lightnvm/core.c   | 590 ++++++++++++++++++++++++++++++++++++++++++++++
>   include/linux/lightnvm.h  | 335 ++++++++++++++++++++++++++
>   7 files changed, 961 insertions(+)
>   create mode 100644 drivers/lightnvm/Kconfig
>   create mode 100644 drivers/lightnvm/Makefile
>   create mode 100644 drivers/lightnvm/core.c
>   create mode 100644 include/linux/lightnvm.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 2d3d55c..d149104 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -6162,6 +6162,14 @@ S:	Supported
>   F:	drivers/nvdimm/pmem.c
>   F:	include/linux/pmem.h
>
> +LIGHTNVM PLATFORM SUPPORT
> +M:	Matias Bjorling <mb@lightnvm.io>
> +W:	http://github/OpenChannelSSD
> +S:	Maintained
> +F:	drivers/lightnvm/
> +F:	include/linux/lightnvm.h
> +F:	include/uapi/linux/lightnvm.h
> +
>   LINUX FOR IBM pSERIES (RS/6000)
>   M:	Paul Mackerras <paulus@au.ibm.com>
>   W:	http://www.ibm.com/linux/ltc/projects/ppc
> diff --git a/drivers/Kconfig b/drivers/Kconfig
> index 6e973b8..3992902 100644
> --- a/drivers/Kconfig
> +++ b/drivers/Kconfig
> @@ -42,6 +42,8 @@ source "drivers/net/Kconfig"
>
>   source "drivers/isdn/Kconfig"
>
> +source "drivers/lightnvm/Kconfig"
> +
>   # input before char - char/joystick depends on it. As does USB.
>
>   source "drivers/input/Kconfig"
> diff --git a/drivers/Makefile b/drivers/Makefile
> index b64b49f..75978ab 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -63,6 +63,10 @@ obj-$(CONFIG_FB_I810)           += video/fbdev/i810/
>   obj-$(CONFIG_FB_INTEL)          += video/fbdev/intelfb/
>
>   obj-$(CONFIG_PARPORT)		+= parport/
> +
> +# lightnvm/ comes before block to initialize bm before usage
> +obj-$(CONFIG_NVM)		+= lightnvm/
> +
>   obj-y				+= base/ block/ misc/ mfd/ nfc/
>   obj-$(CONFIG_LIBNVDIMM)		+= nvdimm/
>   obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
> @@ -165,3 +169,4 @@ obj-$(CONFIG_RAS)		+= ras/
>   obj-$(CONFIG_THUNDERBOLT)	+= thunderbolt/
>   obj-$(CONFIG_CORESIGHT)		+= hwtracing/coresight/
>   obj-$(CONFIG_ANDROID)		+= android/
> +
> diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
> new file mode 100644
> index 0000000..1f8412c
> --- /dev/null
> +++ b/drivers/lightnvm/Kconfig
> @@ -0,0 +1,16 @@
> +#
> +# Open-Channel SSD NVM configuration
> +#
> +
> +menuconfig NVM
> +	bool "Open-Channel SSD target support"
> +	depends on BLOCK
> +	help
> +	  Say Y here to get to enable Open-channel SSDs.
> +
> +	  Open-Channel SSDs implement a set of extension to SSDs, that
> +	  exposes direct access to the underlying non-volatile memory.
> +
> +	  If you say N, all options in this submenu will be skipped and disabled
> +	  only do this if you know what you are doing.
> +
> diff --git a/drivers/lightnvm/Makefile b/drivers/lightnvm/Makefile
> new file mode 100644
> index 0000000..38185e9
> --- /dev/null
> +++ b/drivers/lightnvm/Makefile
> @@ -0,0 +1,5 @@
> +#
> +# Makefile for Open-Channel SSDs.
> +#
> +
> +obj-$(CONFIG_NVM)		:= core.o
> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
> new file mode 100644
> index 0000000..6499922
> --- /dev/null
> +++ b/drivers/lightnvm/core.c
> @@ -0,0 +1,590 @@
> +/*
> + * Copyright (C) 2015 IT University of Copenhagen
> + * Initial release: Matias Bjorling <mabj@itu.dk>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version
> + * 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; see the file COPYING.  If not, write to
> + * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
> + * USA.
> + *
> + */
> +
> +#include <linux/blkdev.h>
> +#include <linux/blk-mq.h>
> +#include <linux/list.h>
> +#include <linux/types.h>
> +#include <linux/sem.h>
> +#include <linux/bitmap.h>
> +#include <linux/module.h>
> +
> +#include <linux/lightnvm.h>
> +
> +static LIST_HEAD(nvm_targets);
> +static LIST_HEAD(nvm_bms);
> +static LIST_HEAD(nvm_devices);
> +static DECLARE_RWSEM(nvm_lock);
> +
> +struct nvm_tgt_type *nvm_find_target_type(const char *name)
> +{
> +	struct nvm_tgt_type *tt;
> +
> +	list_for_each_entry(tt, &nvm_targets, list)
> +		if (!strcmp(name, tt->name))
> +			return tt;
> +
> +	return NULL;
> +}
> +
> +int nvm_register_target(struct nvm_tgt_type *tt)
> +{
> +	int ret = 0;
> +
> +	down_write(&nvm_lock);
> +	if (nvm_find_target_type(tt->name))
> +		ret = -EEXIST;
> +	else
> +		list_add(&tt->list, &nvm_targets);
> +	up_write(&nvm_lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(nvm_register_target);
> +
> +void nvm_unregister_target(struct nvm_tgt_type *tt)
> +{
> +	if (!tt)
> +		return;
> +
> +	down_write(&nvm_lock);
> +	list_del(&tt->list);
> +	up_write(&nvm_lock);
> +}
> +EXPORT_SYMBOL(nvm_unregister_target);
> +
> +void *nvm_alloc_ppalist(struct nvm_dev *dev, gfp_t mem_flags,
> +							dma_addr_t *dma_handler)
> +{
> +	return dev->ops->alloc_ppalist(dev->q, dev->ppalist_pool, mem_flags,
> +								dma_handler);
> +}
> +EXPORT_SYMBOL(nvm_alloc_ppalist);
> +
> +void nvm_free_ppalist(struct nvm_dev *dev, void *ppa_list,
> +							dma_addr_t dma_handler)
> +{
> +	dev->ops->free_ppalist(dev->ppalist_pool, ppa_list, dma_handler);
> +}
> +EXPORT_SYMBOL(nvm_free_ppalist);
> +
> +struct nvm_bm_type *nvm_find_bm_type(const char *name)
> +{
> +	struct nvm_bm_type *bt;
> +
> +	list_for_each_entry(bt, &nvm_bms, list)
> +		if (!strcmp(name, bt->name))
> +			return bt;
> +
> +	return NULL;
> +}
> +
> +int nvm_register_bm(struct nvm_bm_type *bt)
> +{
> +	int ret = 0;
> +
> +	down_write(&nvm_lock);
> +	if (nvm_find_bm_type(bt->name))
> +		ret = -EEXIST;
> +	else
> +		list_add(&bt->list, &nvm_bms);
> +	up_write(&nvm_lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(nvm_register_bm);
> +
> +void nvm_unregister_bm(struct nvm_bm_type *bt)
> +{
> +	if (!bt)
> +		return;
> +
> +	down_write(&nvm_lock);
> +	list_del(&bt->list);
> +	up_write(&nvm_lock);
> +}
> +EXPORT_SYMBOL(nvm_unregister_bm);
> +
> +struct nvm_dev *nvm_find_nvm_dev(const char *name)
> +{
> +	struct nvm_dev *dev;
> +
> +	list_for_each_entry(dev, &nvm_devices, devices)
> +		if (!strcmp(name, dev->name))
> +			return dev;
> +
> +	return NULL;
> +}
> +
> +struct nvm_block *nvm_get_blk(struct nvm_dev *dev, struct nvm_lun *lun,
> +							unsigned long flags)
> +{
> +	return dev->bm->get_blk(dev, lun, flags);
> +}
> +EXPORT_SYMBOL(nvm_get_blk);
> +
> +/* Assumes that all valid pages have already been moved on release to bm */
> +void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
> +{
> +	return dev->bm->put_blk(dev, blk);
> +}
> +EXPORT_SYMBOL(nvm_put_blk);
> +
> +int nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
> +{
> +	return dev->ops->submit_io(dev->q, rqd);
> +}
> +EXPORT_SYMBOL(nvm_submit_io);
> +
> +/* Send erase command to device */
> +int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
> +{
> +	return dev->bm->erase_blk(dev, blk);
> +}
> +EXPORT_SYMBOL(nvm_erase_blk);
> +
> +static void nvm_core_free(struct nvm_dev *dev)
> +{
> +	kfree(dev->identity.chnls);
> +	kfree(dev);
> +}
> +
> +static int nvm_core_init(struct nvm_dev *dev)
> +{
> +	dev->nr_luns = dev->identity.nchannels;
> +	dev->sector_size = EXPOSED_PAGE_SIZE;
> +	INIT_LIST_HEAD(&dev->online_targets);
> +
> +	return 0;
> +}
> +
> +static void nvm_free(struct nvm_dev *dev)
> +{
> +	if (!dev)
> +		return;
> +
> +	if (dev->bm)
> +		dev->bm->unregister_bm(dev);
> +
> +	nvm_core_free(dev);
> +}
> +
> +int nvm_validate_features(struct nvm_dev *dev)
> +{
> +	struct nvm_get_features gf;
> +	int ret;
> +
> +	ret = dev->ops->get_features(dev->q, &gf);
> +	if (ret)
> +		return ret;
> +
> +	dev->features = gf;
> +
> +	return 0;
> +}
> +
> +int nvm_validate_responsibility(struct nvm_dev *dev)
> +{
> +	if (!dev->ops->set_responsibility)
> +		return 0;
> +
> +	return dev->ops->set_responsibility(dev->q, 0);
> +}
> +
> +int nvm_init(struct nvm_dev *dev)
> +{
> +	struct nvm_bm_type *bt;
> +	int ret = 0;
> +
> +	if (!dev->q || !dev->ops)
> +		return -EINVAL;
> +
> +	if (dev->ops->identify(dev->q, &dev->identity)) {
> +		pr_err("nvm: device could not be identified\n");
> +		ret = -EINVAL;
> +		goto err;
> +	}
> +
> +	pr_debug("nvm dev: ver %u type %u chnls %u\n",
> +			dev->identity.ver_id,
> +			dev->identity.nvm_type,
> +			dev->identity.nchannels);
> +
> +	ret = nvm_validate_features(dev);
> +	if (ret) {
> +		pr_err("nvm: disk features are not supported.");
> +		goto err;
> +	}
> +
> +	ret = nvm_validate_responsibility(dev);
> +	if (ret) {
> +		pr_err("nvm: disk responsibilities are not supported.");
> +		goto err;
> +	}
> +
> +	ret = nvm_core_init(dev);
> +	if (ret) {
> +		pr_err("nvm: could not initialize core structures.\n");
> +		goto err;
> +	}
> +
> +	if (!dev->nr_luns) {
> +		pr_err("nvm: device did not expose any luns.\n");
> +		ret = -EINVAL;
> +		goto err;
> +	}
> +
> +	/* register the device with a supported BM */
> +	list_for_each_entry(bt, &nvm_bms, list) {
> +		ret = bt->register_bm(dev);
> +		if (ret < 0)
> +			goto err; /* initialization failed */
> +		if (ret > 0) {
> +			dev->bm = bt;
> +			break; /* successfully initialized */
> +		}
> +	}

Why just search the list from head to tail? Could the user specify it
in nvm_create_target()?
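
For example, the bm could be resolved by name instead of by scan order; a
rough sketch reusing the existing nvm_find_bm_type() (the function name
nvm_attach_bm and the bmname parameter are made up):

	/* sketch: let the caller pick the block manager by name */
	static int nvm_attach_bm(struct nvm_dev *dev, const char *bmname)
	{
		struct nvm_bm_type *bt;

		bt = nvm_find_bm_type(bmname);
		if (!bt)
			return -EINVAL;

		/* register_bm returns > 0 when the bm takes the device */
		if (bt->register_bm(dev) <= 0)
			return -EINVAL;

		dev->bm = bt;
		return 0;
	}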
> +
> +	if (!ret) {
> +		pr_info("nvm: no compatible bm was found.\n");
> +		return 0;
> +	}

If we allow an nvm_device to be registered with no bm, we will hit a
NULL pointer dereference later on.

As mentioned above, why do we have to choose a bm for the nvm device in
nvm_register()?

Thanx
Yang
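
One way to make the exported helpers safe against a missing bm would be a
guard in nvm_get_blk() and friends; just a sketch, not part of the patch:

	struct nvm_block *nvm_get_blk(struct nvm_dev *dev, struct nvm_lun *lun,
							unsigned long flags)
	{
		/* the device may have registered without a compatible bm */
		if (!dev->bm)
			return NULL;

		return dev->bm->get_blk(dev, lun, flags);
	}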
> +
> +	pr_info("nvm: registered %s with luns: %u blocks: %lu sector size: %d\n",
> +		dev->name, dev->nr_luns, dev->total_blocks, dev->sector_size);
> +
> +	return 0;
> +err:
> +	nvm_free(dev);
> +	pr_err("nvm: failed to initialize nvm\n");
> +	return ret;
> +}
> +
> +void nvm_exit(struct nvm_dev *dev)
> +{
> +	if (dev->ppalist_pool)
> +		dev->ops->destroy_ppa_pool(dev->ppalist_pool);
> +	nvm_free(dev);
> +
> +	pr_info("nvm: successfully unloaded\n");
> +}
> +
> +static const struct block_device_operations nvm_fops = {
> +	.owner		= THIS_MODULE,
> +};
> +
> +static int nvm_create_target(struct nvm_dev *dev, char *ttname, char *tname,
> +						int lun_begin, int lun_end)
> +{
> +	struct request_queue *tqueue;
> +	struct gendisk *tdisk;
> +	struct nvm_tgt_type *tt;
> +	struct nvm_target *t;
> +	void *targetdata;
> +
> +	tt = nvm_find_target_type(ttname);
> +	if (!tt) {
> +		pr_err("nvm: target type %s not found\n", ttname);
> +		return -EINVAL;
> +	}
> +
> +	down_write(&nvm_lock);
> +	list_for_each_entry(t, &dev->online_targets, list) {
> +		if (!strcmp(tname, t->disk->disk_name)) {
> +			pr_err("nvm: target name already exists.\n");
> +			up_write(&nvm_lock);
> +			return -EINVAL;
> +		}
> +	}
> +	up_write(&nvm_lock);
> +
> +	t = kmalloc(sizeof(struct nvm_target), GFP_KERNEL);
> +	if (!t)
> +		return -ENOMEM;
> +
> +	tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node);
> +	if (!tqueue)
> +		goto err_t;
> +	blk_queue_make_request(tqueue, tt->make_rq);
> +
> +	tdisk = alloc_disk(0);
> +	if (!tdisk)
> +		goto err_queue;
> +
> +	sprintf(tdisk->disk_name, "%s", tname);
> +	tdisk->flags = GENHD_FL_EXT_DEVT;
> +	tdisk->major = 0;
> +	tdisk->first_minor = 0;
> +	tdisk->fops = &nvm_fops;
> +	tdisk->queue = tqueue;
> +
> +	targetdata = tt->init(dev, tdisk, lun_begin, lun_end);
> +	if (IS_ERR(targetdata))
> +		goto err_init;
> +
> +	tdisk->private_data = targetdata;
> +	tqueue->queuedata = targetdata;
> +
> +	blk_queue_max_hw_sectors(tqueue, 8 * dev->ops->max_phys_sect);
> +
> +	set_capacity(tdisk, tt->capacity(targetdata));
> +	add_disk(tdisk);
> +
> +	t->type = tt;
> +	t->disk = tdisk;
> +
> +	down_write(&nvm_lock);
> +	list_add_tail(&t->list, &dev->online_targets);
> +	up_write(&nvm_lock);
> +
> +	return 0;
> +err_init:
> +	put_disk(tdisk);
> +err_queue:
> +	blk_cleanup_queue(tqueue);
> +err_t:
> +	kfree(t);
> +	return -ENOMEM;
> +}
> +
> +static void nvm_remove_target(struct nvm_target *t)
> +{
> +	struct nvm_tgt_type *tt = t->type;
> +	struct gendisk *tdisk = t->disk;
> +	struct request_queue *q = tdisk->queue;
> +
> +	lockdep_assert_held(&nvm_lock);
> +
> +	del_gendisk(tdisk);
> +	if (tt->exit)
> +		tt->exit(tdisk->private_data);
> +
> +	blk_cleanup_queue(q);
> +
> +	put_disk(tdisk);
> +
> +	list_del(&t->list);
> +	kfree(t);
> +}
> +
> +static int nvm_configure_show(const char *val)
> +{
> +	struct nvm_dev *dev;
> +	char opcode, devname[DISK_NAME_LEN];
> +	int ret;
> +
> +	ret = sscanf(val, "%c %s", &opcode, devname);
> +	if (ret != 2) {
> +		pr_err("nvm: invalid command. Use \"opcode devicename\".\n");
> +		return -EINVAL;
> +	}
> +
> +	dev = nvm_find_nvm_dev(devname);
> +	if (!dev) {
> +		pr_err("nvm: device not found\n");
> +		return -EINVAL;
> +	}
> +
> +	if (!dev->bm)
> +		return 0;
> +
> +	dev->bm->free_blocks_print(dev);
> +
> +	return 0;
> +}
> +
> +static int nvm_configure_del(const char *val)
> +{
> +	struct nvm_target *t = NULL;
> +	struct nvm_dev *dev;
> +	char opcode, tname[255];
> +	int ret;
> +
> +	ret = sscanf(val, "%c %s", &opcode, tname);
> +	if (ret != 2) {
> +		pr_err("nvm: invalid command. Use \"d targetname\".\n");
> +		return -EINVAL;
> +	}
> +
> +	down_write(&nvm_lock);
> +	list_for_each_entry(dev, &nvm_devices, devices)
> +		list_for_each_entry(t, &dev->online_targets, list) {
> +			if (!strcmp(tname, t->disk->disk_name)) {
> +				nvm_remove_target(t);
> +				ret = 0;
> +				break;
> +			}
> +		}
> +	up_write(&nvm_lock);
> +
> +	if (ret) {
> +		pr_err("nvm: target \"%s\" doesn't exist.\n", tname);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int nvm_configure_add(const char *val)
> +{
> +	struct nvm_dev *dev;
> +	char opcode, devname[DISK_NAME_LEN], tgtengine[255], tname[255];
> +	int lun_begin, lun_end, ret;
> +
> +	ret = sscanf(val, "%c %s %s %s %u:%u", &opcode, devname, tgtengine,
> +						tname, &lun_begin, &lun_end);
> +	if (ret != 6) {
> +		pr_err("nvm: invalid command. Use \"opcode device name tgtengine lun_begin:lun_end\".\n");
> +		return -EINVAL;
> +	}
> +
> +	dev = nvm_find_nvm_dev(devname);
> +	if (!dev) {
> +		pr_err("nvm: device not found\n");
> +		return -EINVAL;
> +	}
> +
> +	if (lun_begin > lun_end || lun_end > dev->nr_luns) {
> +		pr_err("nvm: lun out of bound (%u:%u > %u)\n",
> +					lun_begin, lun_end, dev->nr_luns);
> +		return -EINVAL;
> +	}
> +
> +	return nvm_create_target(dev, tname, tgtengine, lun_begin, lun_end);
> +}
> +
> +/* Exposes administrative interface through /sys/module/lnvm/parameters/configure_debug */
> +static int nvm_configure_by_str_event(const char *val,
> +					const struct kernel_param *kp)
> +{
> +	char opcode;
> +	int ret;
> +
> +	ret = sscanf(val, "%c", &opcode);
> +	if (ret != 1) {
> +		pr_err("nvm: configure must be in the format of \"opcode ...\"\n");
> +		return -EINVAL;
> +	}
> +
> +	switch (opcode) {
> +	case 'a':
> +		return nvm_configure_add(val);
> +	case 'd':
> +		return nvm_configure_del(val);
> +	case 's':
> +		return nvm_configure_show(val);
> +	default:
> +		pr_err("nvm: invalid opcode.\n");
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int nvm_configure_get(char *buf, const struct kernel_param *kp)
> +{
> +	int sz = 0;
> +	char *buf_start = buf;
> +	struct nvm_dev *dev;
> +
> +	buf += sprintf(buf, "available devices:\n");
> +	down_write(&nvm_lock);
> +	list_for_each_entry(dev, &nvm_devices, devices) {
> +		if (sz > 4095 - DISK_NAME_LEN)
> +			break;
> +		buf += sprintf(buf, " %s\n", dev->name);
> +	}
> +	up_write(&nvm_lock);
> +
> +	return buf - buf_start - 1;
> +}
> +
> +static const struct kernel_param_ops nvm_configure_by_str_event_param_ops = {
> +	.set	= nvm_configure_by_str_event,
> +	.get	= nvm_configure_get,
> +};
> +
> +#undef MODULE_PARAM_PREFIX
> +#define MODULE_PARAM_PREFIX	"lnvm."
> +
> +module_param_cb(configure_debug, &nvm_configure_by_str_event_param_ops, NULL,
> +									0644);
> +
> +int nvm_register(struct request_queue *q, char *disk_name,
> +							struct nvm_dev_ops *ops)
> +{
> +	struct nvm_dev *dev;
> +	int ret;
> +
> +	if (!ops->identify || !ops->get_features)
> +		return -EINVAL;
> +
> +	dev = kzalloc(sizeof(struct nvm_dev), GFP_KERNEL);
> +	if (!dev)
> +		return -ENOMEM;
> +
> +	dev->q = q;
> +	dev->ops = ops;
> +	strncpy(dev->name, disk_name, DISK_NAME_LEN);
> +
> +	ret = nvm_init(dev);
> +	if (ret)
> +		goto err_init;
> +
> +	down_write(&nvm_lock);
> +	list_add(&dev->devices, &nvm_devices);
> +	up_write(&nvm_lock);
> +
> +	if (dev->ops->max_phys_sect > 256) {
> +		pr_info("nvm: maximum number of sectors supported in target is 255. max_phys_sect set to 255\n");
> +		dev->ops->max_phys_sect = 255;
> +	}
> +
> +	if (dev->ops->max_phys_sect > 1) {
> +		dev->ppalist_pool = dev->ops->create_ppa_pool(dev->q);
> +		if (!dev->ppalist_pool) {
> +			pr_err("nvm: could not create ppa pool\n");
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	return 0;
> +err_init:
> +	kfree(dev);
> +	return ret;
> +}
> +EXPORT_SYMBOL(nvm_register);
> +
> +void nvm_unregister(char *disk_name)
> +{
> +	struct nvm_dev *dev = nvm_find_nvm_dev(disk_name);
> +
> +	if (!dev) {
> +		pr_err("nvm: could not find device %s on unregister\n",
> +								disk_name);
> +		return;
> +	}
> +
> +	nvm_exit(dev);
> +
> +	down_write(&nvm_lock);
> +	list_del(&dev->devices);
> +	up_write(&nvm_lock);
> +}
> +EXPORT_SYMBOL(nvm_unregister);
> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
> new file mode 100644
> index 0000000..9654354
> --- /dev/null
> +++ b/include/linux/lightnvm.h
> @@ -0,0 +1,335 @@
> +#ifndef NVM_H
> +#define NVM_H
> +
> +enum {
> +	NVM_IO_OK = 0,
> +	NVM_IO_REQUEUE = 1,
> +	NVM_IO_DONE = 2,
> +	NVM_IO_ERR = 3,
> +
> +	NVM_IOTYPE_NONE = 0,
> +	NVM_IOTYPE_GC = 1,
> +};
> +
> +#ifdef CONFIG_NVM
> +
> +#include <linux/blkdev.h>
> +#include <linux/types.h>
> +#include <linux/file.h>
> +#include <linux/dmapool.h>
> +
> +enum {
> +	/* HW Responsibilities */
> +	NVM_RSP_L2P	= 1 << 0,
> +	NVM_RSP_GC	= 1 << 1,
> +	NVM_RSP_ECC	= 1 << 2,
> +
> +	/* Physical NVM Type */
> +	NVM_NVMT_BLK	= 0,
> +	NVM_NVMT_BYTE	= 1,
> +
> +	/* Internal IO Scheduling algorithm */
> +	NVM_IOSCHED_CHANNEL	= 0,
> +	NVM_IOSCHED_CHIP	= 1,
> +
> +	/* Status codes */
> +	NVM_SUCCESS		= 0,
> +	NVM_RSP_NOT_CHANGEABLE	= 1,
> +};
> +
> +struct nvm_id_chnl {
> +	u64	laddr_begin;
> +	u64	laddr_end;
> +	u32	oob_size;
> +	u32	queue_size;
> +	u32	gran_read;
> +	u32	gran_write;
> +	u32	gran_erase;
> +	u32	t_r;
> +	u32	t_sqr;
> +	u32	t_w;
> +	u32	t_sqw;
> +	u32	t_e;
> +	u16	chnl_parallelism;
> +	u8	io_sched;
> +	u8	res[133];
> +};
> +
> +struct nvm_id {
> +	u8	ver_id;
> +	u8	nvm_type;
> +	u16	nchannels;
> +	struct nvm_id_chnl *chnls;
> +};
> +
> +struct nvm_get_features {
> +	u64	rsp;
> +	u64	ext;
> +};
> +
> +struct nvm_target {
> +	struct list_head list;
> +	struct nvm_tgt_type *type;
> +	struct gendisk *disk;
> +};
> +
> +struct nvm_tgt_instance {
> +	struct nvm_tgt_type *tt;
> +};
> +
> +struct nvm_rq {
> +	struct nvm_tgt_instance *ins;
> +	struct bio *bio;
> +	union {
> +		sector_t ppa;
> +		sector_t *ppa_list;
> +	};
> +	/* DMA handler to be used by underlying devices supporting DMA */
> +	dma_addr_t dma_ppa_list;
> +	uint8_t npages;
> +};
> +
> +static inline struct nvm_rq *nvm_rq_from_pdu(void *pdu)
> +{
> +	return pdu - sizeof(struct nvm_rq);
> +}
> +
> +static inline void *nvm_rq_to_pdu(struct nvm_rq *rqdata)
> +{
> +	return rqdata + 1;
> +}
> +
> +struct nvm_block;
> +
> +typedef int (nvm_l2p_update_fn)(u64, u64, u64 *, void *);
> +typedef int (nvm_bb_update_fn)(u32, void *, unsigned int, void *);
> +typedef int (nvm_id_fn)(struct request_queue *, struct nvm_id *);
> +typedef int (nvm_get_features_fn)(struct request_queue *,
> +						struct nvm_get_features *);
> +typedef int (nvm_set_rsp_fn)(struct request_queue *, u64);
> +typedef int (nvm_get_l2p_tbl_fn)(struct request_queue *, u64, u64,
> +				nvm_l2p_update_fn *, void *);
> +typedef int (nvm_op_bb_tbl_fn)(struct request_queue *, int, unsigned int,
> +				nvm_bb_update_fn *, void *);
> +typedef int (nvm_submit_io_fn)(struct request_queue *, struct nvm_rq *);
> +typedef int (nvm_erase_blk_fn)(struct request_queue *, sector_t);
> +typedef void *(nvm_create_ppapool_fn)(struct request_queue *);
> +typedef void (nvm_destroy_ppapool_fn)(void *);
> +typedef void *(nvm_alloc_ppalist_fn)(struct request_queue *, void *, gfp_t,
> +								dma_addr_t*);
> +typedef void (nvm_free_ppalist_fn)(void *, void*, dma_addr_t);
> +
> +struct nvm_dev_ops {
> +	nvm_id_fn		*identify;
> +	nvm_get_features_fn	*get_features;
> +	nvm_set_rsp_fn		*set_responsibility;
> +	nvm_get_l2p_tbl_fn	*get_l2p_tbl;
> +	nvm_op_bb_tbl_fn	*set_bb_tbl;
> +	nvm_op_bb_tbl_fn	*get_bb_tbl;
> +
> +	nvm_submit_io_fn	*submit_io;
> +	nvm_erase_blk_fn	*erase_block;
> +
> +	nvm_create_ppapool_fn	*create_ppa_pool;
> +	nvm_destroy_ppapool_fn	*destroy_ppa_pool;
> +	nvm_alloc_ppalist_fn	*alloc_ppalist;
> +	nvm_free_ppalist_fn	*free_ppalist;
> +
> +	uint8_t			max_phys_sect;
> +};
> +
> +struct nvm_lun {
> +	int id;
> +
> +	int nr_pages_per_blk;
> +	unsigned int nr_blocks;		/* end_block - start_block. */
> +	unsigned int nr_free_blocks;	/* Number of unused blocks */
> +
> +	struct nvm_block *blocks;
> +
> +	spinlock_t lock;
> +};
> +
> +struct nvm_block {
> +	struct list_head list;
> +	struct nvm_lun *lun;
> +	unsigned long long id;
> +
> +	void *priv;
> +	int type;
> +};
> +
> +struct nvm_dev {
> +	struct nvm_dev_ops *ops;
> +
> +	struct list_head devices;
> +	struct list_head online_targets;
> +
> +	/* Block manager */
> +	struct nvm_bm_type *bm;
> +	void *bmp;
> +
> +	/* Target information */
> +	int nr_luns;
> +
> +	/* Calculated/Cached values. These do not reflect the actual usable
> +	 * blocks at run-time. */
> +	unsigned long total_pages;
> +	unsigned long total_blocks;
> +	unsigned max_pages_per_blk;
> +
> +	uint32_t sector_size;
> +
> +	void *ppalist_pool;
> +
> +	/* Identity */
> +	struct nvm_id identity;
> +	struct nvm_get_features features;
> +
> +	/* Backend device */
> +	struct request_queue *q;
> +	char name[DISK_NAME_LEN];
> +};
> +
> +typedef void (nvm_tgt_make_rq_fn)(struct request_queue *, struct bio *);
> +typedef sector_t (nvm_tgt_capacity_fn)(void *);
> +typedef void (nvm_tgt_end_io_fn)(struct nvm_rq *, int);
> +typedef void *(nvm_tgt_init_fn)(struct nvm_dev *, struct gendisk *, int, int);
> +typedef void (nvm_tgt_exit_fn)(void *);
> +
> +struct nvm_tgt_type {
> +	const char *name;
> +	unsigned int version[3];
> +
> +	/* target entry points */
> +	nvm_tgt_make_rq_fn *make_rq;
> +	nvm_tgt_capacity_fn *capacity;
> +	nvm_tgt_end_io_fn *end_io;
> +
> +	/* module-specific init/teardown */
> +	nvm_tgt_init_fn *init;
> +	nvm_tgt_exit_fn *exit;
> +
> +	/* For internal use */
> +	struct list_head list;
> +};
> +
> +extern int nvm_register_target(struct nvm_tgt_type *);
> +extern void nvm_unregister_target(struct nvm_tgt_type *);
> +
> +extern void *nvm_alloc_ppalist(struct nvm_dev *, gfp_t, dma_addr_t *);
> +extern void nvm_free_ppalist(struct nvm_dev *, void *, dma_addr_t);
> +
> +typedef int (nvm_bm_register_fn)(struct nvm_dev *);
> +typedef void (nvm_bm_unregister_fn)(struct nvm_dev *);
> +typedef struct nvm_block *(nvm_bm_get_blk_fn)(struct nvm_dev *,
> +					      struct nvm_lun *, unsigned long);
> +typedef void (nvm_bm_put_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef int (nvm_bm_open_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef int (nvm_bm_close_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef void (nvm_bm_flush_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef int (nvm_bm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
> +typedef void (nvm_bm_end_io_fn)(struct nvm_rq *, int);
> +typedef int (nvm_bm_erase_blk_fn)(struct nvm_dev *, struct nvm_block *);
> +typedef int (nvm_bm_register_prog_err_fn)(struct nvm_dev *,
> +	     void (prog_err_fn)(struct nvm_dev *, struct nvm_block *));
> +typedef int (nvm_bm_save_state_fn)(struct file *);
> +typedef int (nvm_bm_restore_state_fn)(struct file *);
> +typedef struct nvm_lun *(nvm_bm_get_luns_fn)(struct nvm_dev *, int, int);
> +typedef void (nvm_bm_free_blocks_print_fn)(struct nvm_dev *);
> +
> +struct nvm_bm_type {
> +	const char *name;
> +	unsigned int version[3];
> +
> +	nvm_bm_register_fn *register_bm;
> +	nvm_bm_unregister_fn *unregister_bm;
> +
> +	/* Block administration callbacks */
> +	nvm_bm_get_blk_fn *get_blk;
> +	nvm_bm_put_blk_fn *put_blk;
> +	nvm_bm_open_blk_fn *open_blk;
> +	nvm_bm_close_blk_fn *close_blk;
> +	nvm_bm_flush_blk_fn *flush_blk;
> +
> +	nvm_bm_submit_io_fn *submit_io;
> +	nvm_bm_end_io_fn *end_io;
> +	nvm_bm_erase_blk_fn *erase_blk;
> +
> +	/* State management for debugging purposes */
> +	nvm_bm_save_state_fn *save_state;
> +	nvm_bm_restore_state_fn *restore_state;
> +
> +	/* Configuration management */
> +	nvm_bm_get_luns_fn *get_luns;
> +
> +	/* Statistics */
> +	nvm_bm_free_blocks_print_fn *free_blocks_print;
> +	struct list_head list;
> +};
> +
> +extern int nvm_register_bm(struct nvm_bm_type *);
> +extern void nvm_unregister_bm(struct nvm_bm_type *);
> +
> +extern struct nvm_block *nvm_get_blk(struct nvm_dev *, struct nvm_lun *,
> +								unsigned long);
> +extern void nvm_put_blk(struct nvm_dev *, struct nvm_block *);
> +extern int nvm_erase_blk(struct nvm_dev *, struct nvm_block *);
> +
> +extern int nvm_register(struct request_queue *, char *,
> +						struct nvm_dev_ops *);
> +extern void nvm_unregister(char *);
> +
> +extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
> +
> +/* We currently assume that the lightnvm device accepts data in 512 byte
> + * chunks. This should be set to the smallest command size available for a
> + * given device.
> + */
> +#define NVM_SECTOR (512)
> +#define EXPOSED_PAGE_SIZE (4096)
> +
> +#define NR_PHY_IN_LOG (EXPOSED_PAGE_SIZE / NVM_SECTOR)
> +
> +#define NVM_MSG_PREFIX "nvm"
> +#define ADDR_EMPTY (~0ULL)
> +
> +static inline unsigned long nvm_get_rq_flags(struct request *rq)
> +{
> +	return (unsigned long)rq->cmd;
> +}
> +
> +#else /* CONFIG_NVM */
> +
> +struct nvm_dev_ops;
> +struct nvm_dev;
> +struct nvm_lun;
> +struct nvm_block;
> +struct nvm_rq {
> +};
> +struct nvm_tgt_type;
> +struct nvm_tgt_instance;
> +
> +static inline struct nvm_tgt_type *nvm_find_target_type(const char *c)
> +{
> +	return NULL;
> +}
> +static inline int nvm_register(struct request_queue *q, char *disk_name,
> +							struct nvm_dev_ops *ops)
> +{
> +	return -EINVAL;
> +}
> +static inline void nvm_unregister(char *disk_name) {}
> +static inline struct nvm_block *nvm_get_blk(struct nvm_dev *dev,
> +				struct nvm_lun *lun, unsigned long flags)
> +{
> +	return NULL;
> +}
> +static inline void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk) {}
> +static inline int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
> +{
> +	return -EINVAL;
> +}
> +
> +#endif /* CONFIG_NVM */
> +#endif /* NVM_H */
>
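
For reference, a device driver hooks into this API roughly as follows (a
minimal sketch: the mydrv_* callbacks are made up, and q/disk are assumed to
come from the driver's probe path):

	static struct nvm_dev_ops mydrv_nvm_ops = {
		.identify	= mydrv_identify,
		.get_features	= mydrv_get_features,
		.submit_io	= mydrv_submit_io,
		.erase_block	= mydrv_erase_block,
		.max_phys_sect	= 1,	/* > 1 also requires the ppa pool ops */
	};

	/* at probe time, once the request_queue and gendisk exist */
	ret = nvm_register(q, disk->disk_name, &mydrv_nvm_ops);

	/* on teardown */
	nvm_unregister(disk->disk_name);

Targets are then instantiated at runtime by writing an "a ..." command to
/sys/module/lnvm/parameters/configure_debug, which ends up in
nvm_configure_add() above.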

--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v7 0/5] Support for Open-Channel SSDs
  2015-08-07 14:29 ` Matias Bjørling
@ 2015-09-02  3:50   ` Dongsheng Yang
  0 siblings, 0 replies; 33+ messages in thread
From: Dongsheng Yang @ 2015-09-02  3:50 UTC (permalink / raw)
  To: Matias Bjørling, hch, axboe, linux-fsdevel, linux-kernel,
	linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

On 08/07/2015 10:29 PM, Matias Bjørling wrote:
> These patches implement support for Open-Channel SSDs.
>
> Applies against axboe's linux-block/for-4.3/drivers and can be found
> in the lkml_v7 branch at https://github.com/OpenChannelSSD/linux
>
> Any feedback is greatly appreciated.

Hi Matias,
	After reading your code, I think this is a great idea.
I tried it with null_nvm and qemu-nvm. I have two questions:
	(1) Why is it named lightnvm? IIUC, this framework
could work with other flash devices, not only the NVMe protocol.
	(2) There are gc and bm, but where is the wear leveling?
In hardware?

Thanx
Yang
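
To make question (2) concrete: if wear leveling were done host-side, would
it sit behind the bm's get_blk callback, something like the toy allocator
below? (is_free() and erase_count() are hypothetical helpers a wear-aware
bm would have to track; this is purely illustrative.)

	static struct nvm_block *mybm_get_blk(struct nvm_dev *dev,
				struct nvm_lun *lun, unsigned long flags)
	{
		struct nvm_block *blk, *coldest = NULL;
		unsigned long min_erases = ULONG_MAX;
		unsigned int i;

		spin_lock(&lun->lock);
		for (i = 0; i < lun->nr_blocks; i++) {
			blk = &lun->blocks[i];
			/* prefer the free block with the fewest erases */
			if (is_free(blk) && erase_count(blk) < min_erases) {
				min_erases = erase_count(blk);
				coldest = blk;
			}
		}
		spin_unlock(&lun->lock);

		return coldest;
	}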
>
> Changes since v6:
>   - Multipage support (Javier Gonzalez)
>   - General code cleanups
>   - Fixed memleak on register failure
>
> Changes since v5:
> Feedback from Christoph Hellwig.
>   - Created new null_nvm from null_blk to register itself as a lightnvm
>     device.
>   - Changed the interface of register/unregister to only take disk_name.
>     The gendisk alloc in nvme is kept. Most instantiations will
>     involve the device gendisk, therefore wait with refactoring to a
>     later time.
>   - Renamed global parameters in core.c and rrpc.c
>
> Changes since v4:
>   - Remove gendisk->nvm dependency
>   - Remove device driver rq private field dependency.
>   - Update submission and completion. The flow is now
>       Target -> Block Manager -> Device Driver, replacing callbacks in
>       device driver.
>   - Abstracted out the block manager into its own module. Other block
>     managers can now be implemented. For example to support fully
>     host-based SSDs.
>   - No longer exposes the device driver gendisk to user-space.
>   - Management is moved into /sys/modules/lnvm/parameters/configure_debug
>
> Changes since v3:
>
>   - Remove dependency on REQ_NVM_GC
>   - Refactor nvme integration to use nvme_submit_sync_cmd for
>     internal commands.
>   - Fix race condition bug on multiple threads on RRPC target.
>   - Rename sysfs entry under the block device from nvm to lightnvm.
>     The configuration is found in /sys/block/*/lightnvm/
>
> Changes since v2:
>
>   Feedback from Paul Bolle:
>   - Fix license to GPLv2, documentation, compilation.
>   Feedback from Keith Busch:
>   - nvme: Move lightnvm out and into nvme-lightnvm.c.
>   - nvme: Set controller css on lightnvm command set.
>   - nvme: Remove OACS.
>   Feedback from Christoph Hellwig:
>   - lightnvm: Move out of block layer into /drivers/lightnvm/core.c
>   - lightnvm: refactor request->phys_sector into device drivers.
>   - lightnvm: refactor prep/unprep into device drivers.
>   - lightnvm: move nvm_dev from request_queue to gendisk.
>
>   New
>   - Bad block table support (From Javier).
>   - Update maintainers file.
>
> Changes since v1:
>
>   - Splitted LightNVM into two parts. A get/put interface for flash
>     blocks and the respective targets that implement flash translation
>     layer logic.
>   - Updated the patches according to the LightNVM specification changes.
>   - Added interface to add/remove targets for a block device.
>
> Thanks to Jens Axboe, Christoph Hellwig, Keith Busch, Paul Bolle,
> Javier Gonzalez and Jesper Madsen for discussions and contributions.
>
> Matias Bjørling (5):
>    lightnvm: Support for Open-Channel SSDs
>    lightnvm: Hybrid Open-Channel SSD RRPC target
>    lightnvm: Hybrid Open-Channel SSD block manager
>    null_nvm: Lightnvm test driver
>    nvme: LightNVM support
>
>   MAINTAINERS                   |    8 +
>   drivers/Kconfig               |    2 +
>   drivers/Makefile              |    5 +
>   drivers/block/Makefile        |    2 +-
>   drivers/block/nvme-core.c     |   23 +-
>   drivers/block/nvme-lightnvm.c |  568 ++++++++++++++++++
>   drivers/lightnvm/Kconfig      |   36 ++
>   drivers/lightnvm/Makefile     |    8 +
>   drivers/lightnvm/bm_hb.c      |  366 ++++++++++++
>   drivers/lightnvm/bm_hb.h      |   46 ++
>   drivers/lightnvm/core.c       |  591 +++++++++++++++++++
>   drivers/lightnvm/null_nvm.c   |  481 +++++++++++++++
>   drivers/lightnvm/rrpc.c       | 1296 +++++++++++++++++++++++++++++++++++++++++
>   drivers/lightnvm/rrpc.h       |  236 ++++++++
>   include/linux/lightnvm.h      |  334 +++++++++++
>   include/linux/nvme.h          |    6 +
>   include/uapi/linux/nvme.h     |    3 +
>   17 files changed, 4007 insertions(+), 4 deletions(-)
>   create mode 100644 drivers/block/nvme-lightnvm.c
>   create mode 100644 drivers/lightnvm/Kconfig
>   create mode 100644 drivers/lightnvm/Makefile
>   create mode 100644 drivers/lightnvm/bm_hb.c
>   create mode 100644 drivers/lightnvm/bm_hb.h
>   create mode 100644 drivers/lightnvm/core.c
>   create mode 100644 drivers/lightnvm/null_nvm.c
>   create mode 100644 drivers/lightnvm/rrpc.c
>   create mode 100644 drivers/lightnvm/rrpc.h
>   create mode 100644 include/linux/lightnvm.h
>


^ permalink raw reply	[flat|nested] 33+ messages in thread

>   create mode 100644 drivers/lightnvm/rrpc.h
>   create mode 100644 include/linux/lightnvm.h
>


* Re: [PATCH v7 1/5] lightnvm: Support for Open-Channel SSDs
  2015-09-02  3:50     ` Dongsheng Yang
@ 2015-09-02 10:48       ` Matias Bjørling
  -1 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-09-02 10:48 UTC (permalink / raw)
  To: Dongsheng Yang, hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

>> +
>> +    /* register with device with a supported BM */
>> +    list_for_each_entry(bt, &nvm_bms, list) {
>> +        ret = bt->register_bm(dev);
>> +        if (ret < 0)
>> +            goto err; /* initialization failed */
>> +        if (ret > 0) {
>> +            dev->bm = bt;
>> +            break; /* successfully initialized */
>> +        }
>> +    }
>
> Why just search it from head to tail? Can user specific it
> in nvm_create_target()?

Hi Yang,

Currently only the rrpc and a couple of out-of-tree block managers are
built. register_bm only tries to find a block manager that supports the
device; when it finds one, that one is initialized. It is an open
question how we choose the right block manager when, for example, both
a proprietary and an open-source block manager are in place. Priorities
might be a way to go, or marking certain block managers as a catch-all;
see the sketch below.

Hopefully we will get away with only one or two block managers in the
future, so we won't have one for each type of device.
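
For illustration, here is a rough sketch of the priority idea. The
priority field on struct nvm_bm_type is hypothetical (the posted
patches do not have it), and locking is left out:

/* Keep nvm_bms sorted by a hypothetical bt->priority (lower value =
 * preferred), so the existing head-to-tail scan in nvm_init() tries
 * device-specific managers before catch-all ones. */
int nvm_register_bm(struct nvm_bm_type *bt)
{
	struct nvm_bm_type *cur;

	list_for_each_entry(cur, &nvm_bms, list)
		if (bt->priority < cur->priority)
			break;
	/* insert before the first entry with a higher priority value,
	 * or at the tail when no such entry exists */
	list_add_tail(&bt->list, &cur->list);
	return 0;
}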

>> +
>> +    if (!ret) {
>> +        pr_info("nvm: no compatible bm was found.\n");
>> +        return 0;
>> +    }
>
> If we allow nvm_device registered with no bm, we would get
> a NULL pointer reference problem in later using.
>

Yes, definitely. In the case that happens, I envision it should be
possible to register a block manager after a device is loaded, and then
any outstanding devices (which do not have a registered block manager)
will be probed again, along the lines of the sketch below.
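
Roughly like this. The nvm_devices list is an assumed name for a
global list of registered devices (the posted patches may track this
differently), and locking and error handling are omitted:

/* Retry attach for devices that registered without a block manager;
 * called right after a new manager has been added to nvm_bms. */
static void nvm_reprobe_devices(struct nvm_bm_type *bt)
{
	struct nvm_dev *dev;

	list_for_each_entry(dev, &nvm_devices, list) {
		if (dev->bm)
			continue;	/* already has a manager */
		if (bt->register_bm(dev) > 0)
			dev->bm = bt;	/* late attach succeeded */
	}
}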

> As mentioned above, why we have to choose bm for nvm in nvm_register?

Without a block manager, we don't know the structure of the device or
how to interact with it. I want to initialize that as soon as possible,
so that layers on top can start interacting.

>
> Thanx
> Yang


* Re: [PATCH v7 0/5] Support for Open-Channel SSDs
  2015-09-02  3:50   ` Dongsheng Yang
@ 2015-09-02 10:59     ` Matias Bjørling
  -1 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-09-02 10:59 UTC (permalink / raw)
  To: Dongsheng Yang, hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

>> Any feedback is greatly appreciated.
>
> Hi Matias,
>      After a reading of your code, that's a great idea.
> I tried it with null_nvm and qemu-nvm. I have two questions
> here.

Hi Yang, thanks for taking a look. I appreciate it.

>      (1), Why we name it lightnvm? IIUC, this framework
> can work for other flashes not only NVMe protocol.

Indeed, there are people working on using it with RapidIO. It can also
work with SATA/SAS, etc.

The lightnvm name came from the technique of offloading devices (which
contain non-volatile memory) so that they only care about managing the
media; in that sense, "light" nvm. I'm open to other suggestions. I
really wanted the OpenNVM / OpenSSD name, but they were already taken.

>      (2), There are gc and bm, but where is the wear leveling?
> In hardware?

It should be implemented within each target. The rrpc module implements
it within its gc routines. Currently rrpc only looks at the amount of
invalid pages when picking a victim block. The PE cycles should also be
taken into account, probably via some weighted cost function, similar
to the cost-based gc used in the DFTL paper.
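
As a hedged sketch of such a weighting (the rrpc_block fields and the
simple product are illustrative guesses, not code from the posted
rrpc.c):

/* Combine reclaimable space with remaining wear headroom so that gc
 * victim selection doubles as wear leveling; a higher score means a
 * better victim. */
static unsigned long rrpc_gc_score(struct rrpc_block *rblk,
				   unsigned long max_erase_count)
{
	/* benefit: pages reclaimed by erasing this block */
	unsigned long reclaim = rblk->nr_invalid_pages;
	/* wear: prefer blocks erased less often than the worst block */
	unsigned long headroom = max_erase_count - rblk->erase_count + 1;

	return reclaim * headroom;
}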

>
> Thanx
> Yang


* Re: [PATCH v7 1/5] lightnvm: Support for Open-Channel SSDs
  2015-09-02 10:48       ` Matias Bjørling
@ 2015-09-04  7:06         ` Dongsheng Yang
  -1 siblings, 0 replies; 33+ messages in thread
From: Dongsheng Yang @ 2015-09-04  7:06 UTC (permalink / raw)
  To: Matias Bjørling, hch, axboe, linux-fsdevel, linux-kernel,
	linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

On 09/02/2015 06:48 PM, Matias Bjørling wrote:
>>> +
>>> +    /* register with device with a supported BM */
>>> +    list_for_each_entry(bt, &nvm_bms, list) {
>>> +        ret = bt->register_bm(dev);
>>> +        if (ret < 0)
>>> +            goto err; /* initialization failed */
>>> +        if (ret > 0) {
>>> +            dev->bm = bt;
>>> +            break; /* successfully initialized */
>>> +        }
>>> +    }
>>
>> Why just search it from head to tail? Can user specific it
>> in nvm_create_target()?
>
> Hi Yang,
>
> Currently only the rrpc and a couple of out-of-tree block managers are
> built. register_bm only tries to find a block manager that supports the
> device; when it finds one, that one is initialized. It is an open
> question how we choose the right block manager when, for example, both
> a proprietary and an open-source block manager are in place. Priorities
> might be a way to go, or marking certain block managers as a catch-all.
>
> Hopefully we will get away with only one or two block managers in
> the future, so we won't have one for each type of device.
>
>>> +
>>> +    if (!ret) {
>>> +        pr_info("nvm: no compatible bm was found.\n");
>>> +        return 0;
>>> +    }
>>
>> If we allow nvm_device registered with no bm, we would get
>> a NULL pointer reference problem in later using.
>>
>
> Yes, definitely.

So here is a suggestion: call register_bm again
if we find nvm_dev->bm == NULL in create_target(). If it is still
NULL after that, return an error "nvm: no compatible bm was found"
and stop the target creation. Otherwise, there would be a NULL pointer
dereference problem.

That's a real problem I hit in my testing, and I made this change
in my local tree. I hope it's useful to you.

Thanx
Yang
> In the case that happens, I envision it should be
> possible to register a block manager after a device is loaded, and then
> any outstanding devices (which do not have a registered block
> manager) will be probed again.
>
>> As mentioned above, why we have to choose bm for nvm in nvm_register?
>
> Without a block manager, we don't know the structure of the device or
> how to interact with it. I want to initialize that as soon as possible,
> so that layers on top can start interacting.
>
>>
>> Thanx
>> Yang
> .
>



* Re: [PATCH v7 1/5] lightnvm: Support for Open-Channel SSDs
  2015-09-04  7:06         ` Dongsheng Yang
@ 2015-09-04  8:05           ` Matias Bjørling
  -1 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-09-04  8:05 UTC (permalink / raw)
  To: Dongsheng Yang, hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

>
> So here is a suggestion, register_bm again
> if we found nvm_dev->bm == NULL in create_target(). And if it is still
> NULL after that. return an error "nvm: no compatible bm was found"
> and stop target creating. Otherwise, there would be a NULL Pointer
> reference problem.
>
> That's a real problem I met in my testing and I did this change
> in my local using. I hope that's useful to you.
>
Hi Yang,

Similar to this?

diff --git i/drivers/lightnvm/core.c w/drivers/lightnvm/core.c
index 5e4c2b8..0d2e5e3 100644
--- i/drivers/lightnvm/core.c
+++ w/drivers/lightnvm/core.c
@@ -262,8 +262,9 @@ int nvm_init(struct nvm_dev *dev)
         }

         if (!ret) {
-               pr_info("nvm: no compatible bm was found.\n");
-               return 0;
+               pr_info("nvm: %s was not initialized due to no compatible bm.\n",
+                                                               dev->name);
+               return -EINVAL;
         }

         pr_info("nvm: registered %s with luns: %u blocks: %lu sector 
size: %d\n",





* Re: [PATCH v7 1/5] lightnvm: Support for Open-Channel SSDs
  2015-09-04  8:05           ` Matias Bjørling
@ 2015-09-04  8:27             ` Dongsheng Yang
  -1 siblings, 0 replies; 33+ messages in thread
From: Dongsheng Yang @ 2015-09-04  8:27 UTC (permalink / raw)
  To: Matias Bjørling, hch, axboe, linux-fsdevel, linux-kernel,
	linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

[-- Attachment #1: Type: text/plain, Size: 1237 bytes --]

On 09/04/2015 04:05 PM, Matias Bjørling wrote:
>>
>> So here is a suggestion: call register_bm again
>> if we find nvm_dev->bm == NULL in create_target(). If it is still
>> NULL after that, return an error "nvm: no compatible bm was found"
>> and stop the target creation. Otherwise, there would be a NULL pointer
>> dereference problem.
>>
>> That's a real problem I hit in my testing, and I made this change
>> in my local tree. I hope it's useful to you.
>>
> Hi Yang,
>
> Similar to this?

Okay, I attached the two changes from my local tree. I hope they are
useful to you.

Yang
>
> diff --git i/drivers/lightnvm/core.c w/drivers/lightnvm/core.c
> index 5e4c2b8..0d2e5e3 100644
> --- i/drivers/lightnvm/core.c
> +++ w/drivers/lightnvm/core.c
> @@ -262,8 +262,9 @@ int nvm_init(struct nvm_dev *dev)
>          }
>
>          if (!ret) {
> -               pr_info("nvm: no compatible bm was found.\n");
> -               return 0;
> +               pr_info("nvm: %s was not initialized due to no compatible bm.\n",
> +                                                               dev->name);
> +               return -EINVAL;
>          }
>
>          pr_info("nvm: registered %s with luns: %u blocks: %lu sector size: %d\n",
>
>
>
> .
>


[-- Attachment #2: 0002-lightNVM-register-bm-in-nvm_create_target-if-dev-bm-.patch --]
[-- Type: text/x-patch, Size: 1558 bytes --]

From 2060232d379328679b22753587d16249f01fa219 Mon Sep 17 00:00:00 2001
From: Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
Date: Fri, 4 Sep 2015 08:10:13 +0900
Subject: [PATCH 2/2] lightNVM: register bm in nvm_create_target if dev->bm is
 NULL

When we create a target, we need to make sure dev->bm is not NULL.
If it is NULL, try to register a bm again. If we still fail to find
a proper bm for this dev, return an error rather than continuing
into a NULL pointer dereference later.

Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
---
 drivers/lightnvm/core.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 5e4c2b8..9c75ea4 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -293,10 +293,30 @@ static int nvm_create_target(struct nvm_dev *dev, char *ttname, char *tname,
 						int lun_begin, int lun_end)
 {
 	struct request_queue *tqueue;
+	struct nvm_bm_type *bt;
 	struct gendisk *tdisk;
 	struct nvm_tgt_type *tt;
 	struct nvm_target *t;
 	void *targetdata;
+	int ret = 0;
+
+	if (!dev->bm) {
+		/* register with device with a supported BM */
+		list_for_each_entry(bt, &nvm_bms, list) {
+			ret = bt->register_bm(dev);
+			if (ret < 0)
+				return ret; /* initialization failed */
+			if (ret > 0) {
+				dev->bm = bt;
+				break; /* successfully initialized */
+			}
+		}
+
+		if (!ret) {
+			pr_info("nvm: no compatible bm was found.\n");
+			return -ENODEV;
+		}
+	}
 
 	tt = nvm_find_target_type(ttname);
 	if (!tt) {
-- 
1.8.4.2


[-- Attachment #3: 0001-lightNVM-fix-a-compatibility-problem-in-compiling.patch --]
[-- Type: text/x-patch; name="0001-lightNVM-fix-a-compatibility-problem-in-compiling.patch", Size: 5428 bytes --]

From 699d279ee0dbf3db5a4e7a78d52fb93e954294a1 Mon Sep 17 00:00:00 2001
From: Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
Date: Mon, 31 Aug 2015 17:22:23 -0400
Subject: [PATCH 1/2] lightNVM: fix a compatibility problem in compiling.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

In some old gcc versions, such as gcc 4.4.7 20120313 (Red Hat 4.4.7-4),
there is a compile error with this kind of code:

struct test {
	union {
		int data;
	};
};

int main()
{
        struct test ins = {
                .data = 1,
        };
        return 0;
}

 # gcc test.c
 # test.c: In function ‘main’:
 # test.c:12: error: unknown field ‘data’ specified in initializer

This patch fixes the problem by initializing the struct in a compatible way.

Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
---
 drivers/block/nvme-lightnvm.c | 58 +++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 32 deletions(-)

diff --git a/drivers/block/nvme-lightnvm.c b/drivers/block/nvme-lightnvm.c
index 8ad84c9..d1dbc67 100644
--- a/drivers/block/nvme-lightnvm.c
+++ b/drivers/block/nvme-lightnvm.c
@@ -184,13 +184,13 @@ static int init_chnls(struct request_queue *q, struct nvm_id *nvm_id,
 	struct nvme_nvm_id_chnl *src = nvme_nvm_id->chnls;
 	struct nvm_id_chnl *dst = nvm_id->chnls;
 	struct nvme_ns *ns = q->queuedata;
-	struct nvme_nvm_command c = {
-		.nvm_identify.opcode = nvme_nvm_admin_identify,
-		.nvm_identify.nsid = cpu_to_le32(ns->ns_id),
-	};
+	struct nvme_nvm_command c = {};
 	unsigned int len = nvm_id->nchannels;
 	int i, end, ret, off = 0;
 
+	c.nvm_identify.opcode = nvme_nvm_admin_identify;
+	c.nvm_identify.nsid = cpu_to_le32(ns->ns_id);
+
 	while (len) {
 		end = min_t(u32, NVME_NVM_CHNLS_PR_REQ, len);
 
@@ -230,13 +230,12 @@ static int nvme_nvm_identify(struct request_queue *q, struct nvm_id *nvm_id)
 {
 	struct nvme_ns *ns = q->queuedata;
 	struct nvme_nvm_id *nvme_nvm_id;
-	struct nvme_nvm_command c = {
-		.nvm_identify.opcode = nvme_nvm_admin_identify,
-		.nvm_identify.nsid = cpu_to_le32(ns->ns_id),
-		.nvm_identify.chnl_off = 0,
-	};
+	struct nvme_nvm_command c = {};
 	int ret;
 
+	c.nvm_identify.opcode = nvme_nvm_admin_identify;
+	c.nvm_identify.nsid = cpu_to_le32(ns->ns_id);
+	c.nvm_identify.chnl_off = 0;
 	nvme_nvm_id = kmalloc(4096, GFP_KERNEL);
 	if (!nvme_nvm_id)
 		return -ENOMEM;
@@ -270,14 +269,13 @@ static int nvme_nvm_get_features(struct request_queue *q,
 						struct nvm_get_features *gf)
 {
 	struct nvme_ns *ns = q->queuedata;
-	struct nvme_nvm_command c = {
-		.common.opcode = nvme_nvm_admin_get_features,
-		.common.nsid = ns->ns_id,
-	};
+	struct nvme_nvm_command c = {};
 	int sz = sizeof(struct nvm_get_features);
 	int ret;
 	u64 *resp;
 
+	c.common.opcode = nvme_nvm_admin_get_features;
+	c.common.nsid = ns->ns_id;
 	resp = kmalloc(sz, GFP_KERNEL);
 	if (!resp)
 		return -ENOMEM;
@@ -297,12 +295,11 @@ done:
 static int nvme_nvm_set_resp(struct request_queue *q, u64 resp)
 {
 	struct nvme_ns *ns = q->queuedata;
-	struct nvme_nvm_command c = {
-		.nvm_resp.opcode = nvme_nvm_admin_set_resp,
-		.nvm_resp.nsid = cpu_to_le32(ns->ns_id),
-		.nvm_resp.resp = cpu_to_le64(resp),
-	};
+	struct nvme_nvm_command c = {};
 
+	c.nvm_resp.opcode = nvme_nvm_admin_set_resp;
+	c.nvm_resp.nsid = cpu_to_le32(ns->ns_id);
+	c.nvm_resp.resp = cpu_to_le64(resp);
 	return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0);
 }
 
@@ -311,16 +308,15 @@ static int nvme_nvm_get_l2p_tbl(struct request_queue *q, u64 slba, u64 nlb,
 {
 	struct nvme_ns *ns = q->queuedata;
 	struct nvme_dev *dev = ns->dev;
-	struct nvme_nvm_command c = {
-		.nvm_l2p.opcode = nvme_nvm_admin_get_l2p_tbl,
-		.nvm_l2p.nsid = cpu_to_le32(ns->ns_id),
-	};
+	struct nvme_nvm_command c = {};
 	u32 len = queue_max_hw_sectors(q) << 9;
 	u64 nlb_pr_rq = len / sizeof(u64);
 	u64 cmd_slba = slba;
 	void *entries;
 	int ret = 0;
 
+	c.nvm_l2p.opcode = nvme_nvm_admin_get_l2p_tbl;
+	c.nvm_l2p.nsid = cpu_to_le32(ns->ns_id);
 	entries = kmalloc(len, GFP_KERNEL);
 	if (!entries)
 		return -ENOMEM;
@@ -365,15 +361,14 @@ static int nvme_nvm_get_bb_tbl(struct request_queue *q, int lunid,
 {
 	struct nvme_ns *ns = q->queuedata;
 	struct nvme_dev *dev = ns->dev;
-	struct nvme_nvm_command c = {
-		.nvm_get_bb.opcode = nvme_nvm_admin_get_bb_tbl,
-		.nvm_get_bb.nsid = cpu_to_le32(ns->ns_id),
-		.nvm_get_bb.lbb = cpu_to_le32(lunid),
-	};
+	struct nvme_nvm_command c = {};
 	void *bb_bitmap;
 	u16 bb_bitmap_size;
 	int ret = 0;
 
+	c.nvm_get_bb.opcode = nvme_nvm_admin_get_bb_tbl;
+	c.nvm_get_bb.nsid = cpu_to_le32(ns->ns_id);
+	c.nvm_get_bb.lbb = cpu_to_le32(lunid);
 	bb_bitmap_size = ((nr_blocks >> 15) + 1) * PAGE_SIZE;
 	bb_bitmap = kmalloc(bb_bitmap_size, GFP_KERNEL);
 	if (!bb_bitmap)
@@ -471,12 +466,11 @@ static int nvme_nvm_submit_io(struct request_queue *q, struct nvm_rq *rqd)
 static int nvme_nvm_erase_block(struct request_queue *q, sector_t block_id)
 {
 	struct nvme_ns *ns = q->queuedata;
-	struct nvme_nvm_command c = {
-		.nvm_erase.opcode = nvme_nvm_cmd_erase,
-		.nvm_erase.nsid = cpu_to_le32(ns->ns_id),
-		.nvm_erase.blk_addr = cpu_to_le64(block_id),
-	};
+	struct nvme_nvm_command c = {};
 
+	c.nvm_erase.opcode = nvme_nvm_cmd_erase;
+	c.nvm_erase.nsid = cpu_to_le32(ns->ns_id);
+	c.nvm_erase.blk_addr = cpu_to_le64(block_id);
 	return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0);
 }
 
-- 
1.8.4.2



* Re: [PATCH v7 1/5] lightnvm: Support for Open-Channel SSDs
  2015-09-04  8:27             ` Dongsheng Yang
@ 2015-09-04  8:49               ` Matias Bjørling
  -1 siblings, 0 replies; 33+ messages in thread
From: Matias Bjørling @ 2015-09-04  8:49 UTC (permalink / raw)
  To: Dongsheng Yang, hch, axboe, linux-fsdevel, linux-kernel, linux-nvme
  Cc: jg, Stephen.Bates, keith.busch, Matias Bjørling

On 09/04/2015 10:27 AM, Dongsheng Yang wrote:
> On 09/04/2015 04:05 PM, Matias Bjørling wrote:
>>>
>>> So here is a suggestion: call register_bm again
>>> if we find nvm_dev->bm == NULL in create_target(). If it is still
>>> NULL after that, return an error "nvm: no compatible bm was found"
>>> and stop the target creation. Otherwise, there would be a NULL pointer
>>> dereference problem.
>>>
>>> That's a real problem I hit in my testing, and I made this change
>>> in my local tree. I hope it's useful to you.
>>>
>> Hi Yang,
>>
>> Similar to this?
>
> Okay, I attached the two changes from my local tree. I hope they are
> useful to you.
>

Thanks! Applied and pushed to master.

