linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 00/11] Small fixes for LightNVM
@ 2016-06-29 14:41 Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 01/11] lightnvm: remove checkpatch warning for unsigned ints Matias Bjørling
                   ` (10 more replies)
  0 siblings, 11 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

A collection of small fixes destined for the 4.8 kernel.

Most notable is the move of target management into the general media
manager. The concept of targets is media-manager specific and should
therefore be managed there. Core now only passes target requests along
to the appropriate media manager.
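
As a rough sketch of the resulting split (illustrative and simplified,
not the exact code added in patch 05; nvm_forward_create is a made-up
name), the core only resolves the device and then hands the request to
the media manager callback:

/* Sketch, assuming core.c context (nvm_lock, nvm_find_nvm_dev). */
static int nvm_forward_create(struct nvm_ioctl_create *create)
{
	struct nvm_dev *dev;

	down_write(&nvm_lock);
	dev = nvm_find_nvm_dev(create->dev);
	up_write(&nvm_lock);

	if (!dev)
		return -EINVAL;
	if (!dev->mt)
		return -ENODEV;

	/* target instantiation now lives in the media manager */
	return dev->mt->create_tgt(dev, create);
}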

Matias Bjørling (11):
  lightnvm: remove checkpatch warning for unsigned ints
  lightnvm: fix checkpatch terse errors
  lightnvm: remove open/close statistics for gennvm
  lightnvm: rename gennvm and update description
  lightnvm: move target mgmt into media mgr
  lightnvm: remove nested lock conflict with mm
  lightnvm: remove unused lists from struct rrpc_block
  lightnvm: remove _unlocked variant of [get/put]_blk
  lightnvm: fix lun offset calculation for mark blk
  lightnvm: make ppa_list const in nvm_set_rqd_list
  lightnvm: make __nvm_submit_ppa static

 drivers/lightnvm/Kconfig  |  10 +-
 drivers/lightnvm/core.c   | 234 +++++++---------------------
 drivers/lightnvm/gennvm.c | 385 +++++++++++++++++++++++++++++-----------------
 drivers/lightnvm/gennvm.h |  10 +-
 drivers/lightnvm/rrpc.c   |  30 +---
 drivers/lightnvm/rrpc.h   |  12 +-
 include/linux/lightnvm.h  |  33 ++--
 7 files changed, 333 insertions(+), 381 deletions(-)

-- 
2.1.4


* [RFC PATCH 01/11] lightnvm: remove checkpatch warning for unsigned ints
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 02/11] lightnvm: fix checkpatch terse errors Matias Bjørling
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

Checkpatch found three instances where the type should be written out
in full.

./drivers/lightnvm/rrpc.h:184: WARNING: Prefer 'unsigned int' to bare
use of 'unsigned'
./drivers/lightnvm/rrpc.h:209: WARNING: Prefer 'unsigned int' to bare
use of 'unsigned'
./drivers/lightnvm/rrpc.c:51: WARNING: Prefer 'unsigned int' to bare use
of 'unsigned'

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/rrpc.c | 2 +-
 drivers/lightnvm/rrpc.h | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index 736e669..90c7cb4 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -48,7 +48,7 @@ static void rrpc_page_invalidate(struct rrpc *rrpc, struct rrpc_addr *a)
 }
 
 static void rrpc_invalidate_range(struct rrpc *rrpc, sector_t slba,
-								unsigned len)
+							unsigned int len)
 {
 	sector_t i;
 
diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h
index 87e84b5..5797343 100644
--- a/drivers/lightnvm/rrpc.h
+++ b/drivers/lightnvm/rrpc.h
@@ -188,7 +188,7 @@ static inline int request_intersects(struct rrpc_inflight_rq *r,
 }
 
 static int __rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
-			     unsigned pages, struct rrpc_inflight_rq *r)
+			     unsigned int pages, struct rrpc_inflight_rq *r)
 {
 	sector_t laddr_end = laddr + pages - 1;
 	struct rrpc_inflight_rq *rtmp;
@@ -213,7 +213,7 @@ static int __rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
 }
 
 static inline int rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
-				 unsigned pages,
+				 unsigned int pages,
 				 struct rrpc_inflight_rq *r)
 {
 	BUG_ON((laddr + pages) > rrpc->nr_sects);
-- 
2.1.4


* [RFC PATCH 02/11] lightnvm: fix checkpatch terse errors
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 01/11] lightnvm: remove checkpatch warning for unsigned ints Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 03/11] lightnvm: remove open/close statistics for gennvm Matias Bjørling
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

A couple of small checkpatch fixups to stop it from complaining.

./drivers/lightnvm/core.c:360: WARNING: line over 80 characters
./drivers/lightnvm/core.c:360: ERROR: trailing statements should be on
next line
./drivers/lightnvm/core.c:503: WARNING: Block comments use a trailing */
on a separate line

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/core.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 8beb9c0..0da196f 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -373,7 +373,9 @@ int __nvm_submit_ppa(struct nvm_dev *dev, struct nvm_rq *rqd, int opcode,
 	/* Prevent hang_check timer from firing at us during very long I/O */
 	hang_check = sysctl_hung_task_timeout_secs;
 	if (hang_check)
-		while (!wait_for_completion_io_timeout(&wait, hang_check * (HZ/2)));
+		while (!wait_for_completion_io_timeout(&wait,
+							hang_check * (HZ/2)))
+			;
 	else
 		wait_for_completion_io(&wait);
 
@@ -516,7 +518,8 @@ static int nvm_init_mlc_tbl(struct nvm_dev *dev, struct nvm_id_group *grp)
 	/* The lower page table encoding consists of a list of bytes, where each
 	 * has a lower and an upper half. The first half byte maintains the
 	 * increment value and every value after is an offset added to the
-	 * previous incrementation value */
+	 * previous incrementation value
+	 */
 	dev->lptbl[0] = mlc->pairs[0] & 0xF;
 	for (i = 1; i < dev->lps_per_blk; i++) {
 		p = mlc->pairs[i >> 1];
-- 
2.1.4


* [RFC PATCH 03/11] lightnvm: remove open/close statistics for gennvm
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 01/11] lightnvm: remove checkpatch warning for unsigned ints Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 02/11] lightnvm: fix checkpatch terse errors Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 04/11] lightnvm: rename gennvm and update description Matias Bjørling
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

The responsibility of the media manager is not to keep track of
open/closed blocks. This is better maintained within the target, which
already manages this information on writes.

Remove the statistics and merge the states NVM_BLK_ST_OPEN and
NVM_BLK_ST_CLOSED.
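
If a target wants to keep this bookkeeping itself, a minimal sketch of
target-side accounting could look like the following (tgt_lun_stats,
nr_open and nr_closed are illustrative names, not part of this patch):

/* Hypothetical target-side open/closed accounting; the media manager
 * now only tracks nr_free_blocks per lun.
 */
struct tgt_lun_stats {
	spinlock_t lock;
	unsigned int nr_open;	/* writable blocks held by the target */
	unsigned int nr_closed;	/* read-only blocks held by the target */
};

static void tgt_block_closed(struct tgt_lun_stats *stats)
{
	spin_lock(&stats->lock);
	stats->nr_open--;
	stats->nr_closed++;
	spin_unlock(&stats->lock);
}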

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/gennvm.c | 31 ++++++-------------------------
 drivers/lightnvm/rrpc.c   |  5 -----
 include/linux/lightnvm.h  | 15 +++------------
 3 files changed, 9 insertions(+), 42 deletions(-)

diff --git a/drivers/lightnvm/gennvm.c b/drivers/lightnvm/gennvm.c
index ec9fb68..1a3e164 100644
--- a/drivers/lightnvm/gennvm.c
+++ b/drivers/lightnvm/gennvm.c
@@ -122,9 +122,6 @@ static int gennvm_luns_init(struct nvm_dev *dev, struct gen_nvm *gn)
 		lun->vlun.lun_id = i % dev->luns_per_chnl;
 		lun->vlun.chnl_id = i / dev->luns_per_chnl;
 		lun->vlun.nr_free_blocks = dev->blks_per_lun;
-		lun->vlun.nr_open_blocks = 0;
-		lun->vlun.nr_closed_blocks = 0;
-		lun->vlun.nr_bad_blocks = 0;
 	}
 	return 0;
 }
@@ -149,7 +146,6 @@ static int gennvm_block_bb(struct gen_nvm *gn, struct ppa_addr ppa,
 
 		blk = &lun->vlun.blocks[i];
 		list_move_tail(&blk->list, &lun->bb_list);
-		lun->vlun.nr_bad_blocks++;
 		lun->vlun.nr_free_blocks--;
 	}
 
@@ -200,9 +196,8 @@ static int gennvm_block_map(u64 slba, u32 nlb, __le64 *entries, void *private)
 			 * block state. The block is assumed to be open.
 			 */
 			list_move_tail(&blk->list, &lun->used_list);
-			blk->state = NVM_BLK_ST_OPEN;
+			blk->state = NVM_BLK_ST_TGT;
 			lun->vlun.nr_free_blocks--;
-			lun->vlun.nr_open_blocks++;
 		}
 	}
 
@@ -346,11 +341,10 @@ static struct nvm_block *gennvm_get_blk_unlocked(struct nvm_dev *dev,
 		goto out;
 
 	blk = list_first_entry(&lun->free_list, struct nvm_block, list);
+
 	list_move_tail(&blk->list, &lun->used_list);
-	blk->state = NVM_BLK_ST_OPEN;
-
+	blk->state = NVM_BLK_ST_TGT;
 	lun->vlun.nr_free_blocks--;
-	lun->vlun.nr_open_blocks++;
 
 out:
 	return blk;
@@ -374,27 +368,18 @@ static void gennvm_put_blk_unlocked(struct nvm_dev *dev, struct nvm_block *blk)
 
 	assert_spin_locked(&vlun->lock);
 
-	if (blk->state & NVM_BLK_ST_OPEN) {
+	if (blk->state & NVM_BLK_ST_TGT) {
 		list_move_tail(&blk->list, &lun->free_list);
-		lun->vlun.nr_open_blocks--;
-		lun->vlun.nr_free_blocks++;
-		blk->state = NVM_BLK_ST_FREE;
-	} else if (blk->state & NVM_BLK_ST_CLOSED) {
-		list_move_tail(&blk->list, &lun->free_list);
-		lun->vlun.nr_closed_blocks--;
 		lun->vlun.nr_free_blocks++;
 		blk->state = NVM_BLK_ST_FREE;
 	} else if (blk->state & NVM_BLK_ST_BAD) {
 		list_move_tail(&blk->list, &lun->bb_list);
-		lun->vlun.nr_bad_blocks++;
 		blk->state = NVM_BLK_ST_BAD;
 	} else {
 		WARN_ON_ONCE(1);
 		pr_err("gennvm: erroneous block type (%lu -> %u)\n",
 							blk->id, blk->state);
 		list_move_tail(&blk->list, &lun->bb_list);
-		lun->vlun.nr_bad_blocks++;
-		blk->state = NVM_BLK_ST_BAD;
 	}
 }
 
@@ -516,12 +501,8 @@ static void gennvm_lun_info_print(struct nvm_dev *dev)
 	gennvm_for_each_lun(gn, lun, i) {
 		spin_lock(&lun->vlun.lock);
 
-		pr_info("%s: lun%8u\t%u\t%u\t%u\t%u\n",
-				dev->name, i,
-				lun->vlun.nr_free_blocks,
-				lun->vlun.nr_open_blocks,
-				lun->vlun.nr_closed_blocks,
-				lun->vlun.nr_bad_blocks);
+		pr_info("%s: lun%8u\t%u\n", dev->name, i,
+						lun->vlun.nr_free_blocks);
 
 		spin_unlock(&lun->vlun.lock);
 	}
diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index 90c7cb4..c3a6f34 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -512,17 +512,12 @@ static void rrpc_gc_queue(struct work_struct *work)
 	struct rrpc_block *rblk = gcb->rblk;
 	struct rrpc_lun *rlun = rblk->rlun;
 	struct nvm_lun *lun = rblk->parent->lun;
-	struct nvm_block *blk = rblk->parent;
 
 	spin_lock(&rlun->lock);
 	list_add_tail(&rblk->prio, &rlun->prio_list);
 	spin_unlock(&rlun->lock);
 
 	spin_lock(&lun->lock);
-	lun->nr_open_blocks--;
-	lun->nr_closed_blocks++;
-	blk->state &= ~NVM_BLK_ST_OPEN;
-	blk->state |= NVM_BLK_ST_CLOSED;
 	list_move_tail(&rblk->list, &rlun->closed_list);
 	spin_unlock(&lun->lock);
 
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index cee4c8d..8b51d57 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -269,24 +269,15 @@ struct nvm_lun {
 	int lun_id;
 	int chnl_id;
 
-	/* It is up to the target to mark blocks as closed. If the target does
-	 * not do it, all blocks are marked as open, and nr_open_blocks
-	 * represents the number of blocks in use
-	 */
-	unsigned int nr_open_blocks;	/* Number of used, writable blocks */
-	unsigned int nr_closed_blocks;	/* Number of used, read-only blocks */
+	spinlock_t lock;
+
 	unsigned int nr_free_blocks;	/* Number of unused blocks */
-	unsigned int nr_bad_blocks;	/* Number of bad blocks */
-
-	spinlock_t lock;
-
 	struct nvm_block *blocks;
 };
 
 enum {
 	NVM_BLK_ST_FREE =	0x1,	/* Free block */
-	NVM_BLK_ST_OPEN =	0x2,	/* Open block - read-write */
-	NVM_BLK_ST_CLOSED =	0x4,	/* Closed block - read-only */
+	NVM_BLK_ST_TGT =	0x2,	/* Block in use by target */
 	NVM_BLK_ST_BAD =	0x8,	/* Bad block */
 };
 
-- 
2.1.4


* [RFC PATCH 04/11] lightnvm: rename gennvm and update description
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
                   ` (2 preceding siblings ...)
  2016-06-29 14:41 ` [RFC PATCH 03/11] lightnvm: remove open/close statistics for gennvm Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 05/11] lightnvm: move target mgmt into media mgr Matias Bjørling
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

The generic manager should be called the general media manager. Instead
of prefixing each data structure with the rather long name "gennvm",
shorten the prefix to "gen". Also update the description of the media
manager to make its purpose clearer.

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/Kconfig  |  10 ++-
 drivers/lightnvm/gennvm.c | 184 +++++++++++++++++++++++-----------------------
 drivers/lightnvm/gennvm.h |   7 +-
 3 files changed, 102 insertions(+), 99 deletions(-)

diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig
index 85a3390..61c68a1 100644
--- a/drivers/lightnvm/Kconfig
+++ b/drivers/lightnvm/Kconfig
@@ -27,11 +27,13 @@ config NVM_DEBUG
 	It is required to create/remove targets without IOCTLs.
 
 config NVM_GENNVM
-	tristate "Generic NVM manager for Open-Channel SSDs"
+	tristate "General Non-Volatile Memory Manager for Open-Channel SSDs"
 	---help---
-	NVM media manager for Open-Channel SSDs that offload management
-	functionality to device, while keeping data placement and garbage
-	collection decisions on the host.
+	Non-volatile memory media manager for Open-Channel SSDs that implements
+	physical media metadata management and block provisioning API.
+
+	This is the standard media manager for using Open-Channel SSDs, and
+	required for targets to be instantiated.
 
 config NVM_RRPC
 	tristate "Round-robin Hybrid Open-Channel SSD target"
diff --git a/drivers/lightnvm/gennvm.c b/drivers/lightnvm/gennvm.c
index 1a3e164..3d2762f 100644
--- a/drivers/lightnvm/gennvm.c
+++ b/drivers/lightnvm/gennvm.c
@@ -15,22 +15,22 @@
  * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
  * USA.
  *
- * Implementation of a generic nvm manager for Open-Channel SSDs.
+ * Implementation of a general nvm manager for Open-Channel SSDs.
  */
 
 #include "gennvm.h"
 
-static int gennvm_get_area(struct nvm_dev *dev, sector_t *lba, sector_t len)
+static int gen_get_area(struct nvm_dev *dev, sector_t *lba, sector_t len)
 {
-	struct gen_nvm *gn = dev->mp;
-	struct gennvm_area *area, *prev, *next;
+	struct gen_dev *gn = dev->mp;
+	struct gen_area *area, *prev, *next;
 	sector_t begin = 0;
 	sector_t max_sectors = (dev->sec_size * dev->total_secs) >> 9;
 
 	if (len > max_sectors)
 		return -EINVAL;
 
-	area = kmalloc(sizeof(struct gennvm_area), GFP_KERNEL);
+	area = kmalloc(sizeof(struct gen_area), GFP_KERNEL);
 	if (!area)
 		return -ENOMEM;
 
@@ -64,10 +64,10 @@ static int gennvm_get_area(struct nvm_dev *dev, sector_t *lba, sector_t len)
 	return 0;
 }
 
-static void gennvm_put_area(struct nvm_dev *dev, sector_t begin)
+static void gen_put_area(struct nvm_dev *dev, sector_t begin)
 {
-	struct gen_nvm *gn = dev->mp;
-	struct gennvm_area *area;
+	struct gen_dev *gn = dev->mp;
+	struct gen_area *area;
 
 	spin_lock(&dev->lock);
 	list_for_each_entry(area, &gn->area_list, list) {
@@ -82,27 +82,27 @@ static void gennvm_put_area(struct nvm_dev *dev, sector_t begin)
 	spin_unlock(&dev->lock);
 }
 
-static void gennvm_blocks_free(struct nvm_dev *dev)
+static void gen_blocks_free(struct nvm_dev *dev)
 {
-	struct gen_nvm *gn = dev->mp;
+	struct gen_dev *gn = dev->mp;
 	struct gen_lun *lun;
 	int i;
 
-	gennvm_for_each_lun(gn, lun, i) {
+	gen_for_each_lun(gn, lun, i) {
 		if (!lun->vlun.blocks)
 			break;
 		vfree(lun->vlun.blocks);
 	}
 }
 
-static void gennvm_luns_free(struct nvm_dev *dev)
+static void gen_luns_free(struct nvm_dev *dev)
 {
-	struct gen_nvm *gn = dev->mp;
+	struct gen_dev *gn = dev->mp;
 
 	kfree(gn->luns);
 }
 
-static int gennvm_luns_init(struct nvm_dev *dev, struct gen_nvm *gn)
+static int gen_luns_init(struct nvm_dev *dev, struct gen_dev *gn)
 {
 	struct gen_lun *lun;
 	int i;
@@ -111,7 +111,7 @@ static int gennvm_luns_init(struct nvm_dev *dev, struct gen_nvm *gn)
 	if (!gn->luns)
 		return -ENOMEM;
 
-	gennvm_for_each_lun(gn, lun, i) {
+	gen_for_each_lun(gn, lun, i) {
 		spin_lock_init(&lun->vlun.lock);
 		INIT_LIST_HEAD(&lun->free_list);
 		INIT_LIST_HEAD(&lun->used_list);
@@ -126,7 +126,7 @@ static int gennvm_luns_init(struct nvm_dev *dev, struct gen_nvm *gn)
 	return 0;
 }
 
-static int gennvm_block_bb(struct gen_nvm *gn, struct ppa_addr ppa,
+static int gen_block_bb(struct gen_dev *gn, struct ppa_addr ppa,
 							u8 *blks, int nr_blks)
 {
 	struct nvm_dev *dev = gn->dev;
@@ -152,10 +152,10 @@ static int gennvm_block_bb(struct gen_nvm *gn, struct ppa_addr ppa,
 	return 0;
 }
 
-static int gennvm_block_map(u64 slba, u32 nlb, __le64 *entries, void *private)
+static int gen_block_map(u64 slba, u32 nlb, __le64 *entries, void *private)
 {
 	struct nvm_dev *dev = private;
-	struct gen_nvm *gn = dev->mp;
+	struct gen_dev *gn = dev->mp;
 	u64 elba = slba + nlb;
 	struct gen_lun *lun;
 	struct nvm_block *blk;
@@ -163,7 +163,7 @@ static int gennvm_block_map(u64 slba, u32 nlb, __le64 *entries, void *private)
 	int lun_id;
 
 	if (unlikely(elba > dev->total_secs)) {
-		pr_err("gennvm: L2P data from device is out of bounds!\n");
+		pr_err("gen: L2P data from device is out of bounds!\n");
 		return -EINVAL;
 	}
 
@@ -171,7 +171,7 @@ static int gennvm_block_map(u64 slba, u32 nlb, __le64 *entries, void *private)
 		u64 pba = le64_to_cpu(entries[i]);
 
 		if (unlikely(pba >= dev->total_secs && pba != U64_MAX)) {
-			pr_err("gennvm: L2P data entry is out of bounds!\n");
+			pr_err("gen: L2P data entry is out of bounds!\n");
 			return -EINVAL;
 		}
 
@@ -204,7 +204,7 @@ static int gennvm_block_map(u64 slba, u32 nlb, __le64 *entries, void *private)
 	return 0;
 }
 
-static int gennvm_blocks_init(struct nvm_dev *dev, struct gen_nvm *gn)
+static int gen_blocks_init(struct nvm_dev *dev, struct gen_dev *gn)
 {
 	struct gen_lun *lun;
 	struct nvm_block *block;
@@ -217,7 +217,7 @@ static int gennvm_blocks_init(struct nvm_dev *dev, struct gen_nvm *gn)
 	if (!blks)
 		return -ENOMEM;
 
-	gennvm_for_each_lun(gn, lun, lun_iter) {
+	gen_for_each_lun(gn, lun, lun_iter) {
 		lun->vlun.blocks = vzalloc(sizeof(struct nvm_block) *
 							dev->blks_per_lun);
 		if (!lun->vlun.blocks) {
@@ -251,20 +251,20 @@ static int gennvm_blocks_init(struct nvm_dev *dev, struct gen_nvm *gn)
 
 			ret = nvm_get_bb_tbl(dev, ppa, blks);
 			if (ret)
-				pr_err("gennvm: could not get BB table\n");
+				pr_err("gen: could not get BB table\n");
 
-			ret = gennvm_block_bb(gn, ppa, blks, nr_blks);
+			ret = gen_block_bb(gn, ppa, blks, nr_blks);
 			if (ret)
-				pr_err("gennvm: BB table map failed\n");
+				pr_err("gen: BB table map failed\n");
 		}
 	}
 
 	if ((dev->identity.dom & NVM_RSP_L2P) && dev->ops->get_l2p_tbl) {
 		ret = dev->ops->get_l2p_tbl(dev, 0, dev->total_secs,
-							gennvm_block_map, dev);
+							gen_block_map, dev);
 		if (ret) {
-			pr_err("gennvm: could not read L2P table.\n");
-			pr_warn("gennvm: default block initialization");
+			pr_err("gen: could not read L2P table.\n");
+			pr_warn("gen: default block initialization");
 		}
 	}
 
@@ -272,23 +272,23 @@ static int gennvm_blocks_init(struct nvm_dev *dev, struct gen_nvm *gn)
 	return 0;
 }
 
-static void gennvm_free(struct nvm_dev *dev)
+static void gen_free(struct nvm_dev *dev)
 {
-	gennvm_blocks_free(dev);
-	gennvm_luns_free(dev);
+	gen_blocks_free(dev);
+	gen_luns_free(dev);
 	kfree(dev->mp);
 	dev->mp = NULL;
 }
 
-static int gennvm_register(struct nvm_dev *dev)
+static int gen_register(struct nvm_dev *dev)
 {
-	struct gen_nvm *gn;
+	struct gen_dev *gn;
 	int ret;
 
 	if (!try_module_get(THIS_MODULE))
 		return -ENODEV;
 
-	gn = kzalloc(sizeof(struct gen_nvm), GFP_KERNEL);
+	gn = kzalloc(sizeof(struct gen_dev), GFP_KERNEL);
 	if (!gn)
 		return -ENOMEM;
 
@@ -297,32 +297,32 @@ static int gennvm_register(struct nvm_dev *dev)
 	INIT_LIST_HEAD(&gn->area_list);
 	dev->mp = gn;
 
-	ret = gennvm_luns_init(dev, gn);
+	ret = gen_luns_init(dev, gn);
 	if (ret) {
-		pr_err("gennvm: could not initialize luns\n");
+		pr_err("gen: could not initialize luns\n");
 		goto err;
 	}
 
-	ret = gennvm_blocks_init(dev, gn);
+	ret = gen_blocks_init(dev, gn);
 	if (ret) {
-		pr_err("gennvm: could not initialize blocks\n");
+		pr_err("gen: could not initialize blocks\n");
 		goto err;
 	}
 
 	return 1;
 err:
-	gennvm_free(dev);
+	gen_free(dev);
 	module_put(THIS_MODULE);
 	return ret;
 }
 
-static void gennvm_unregister(struct nvm_dev *dev)
+static void gen_unregister(struct nvm_dev *dev)
 {
-	gennvm_free(dev);
+	gen_free(dev);
 	module_put(THIS_MODULE);
 }
 
-static struct nvm_block *gennvm_get_blk_unlocked(struct nvm_dev *dev,
+static struct nvm_block *gen_get_blk_unlocked(struct nvm_dev *dev,
 				struct nvm_lun *vlun, unsigned long flags)
 {
 	struct gen_lun *lun = container_of(vlun, struct gen_lun, vlun);
@@ -332,7 +332,7 @@ static struct nvm_block *gennvm_get_blk_unlocked(struct nvm_dev *dev,
 	assert_spin_locked(&vlun->lock);
 
 	if (list_empty(&lun->free_list)) {
-		pr_err_ratelimited("gennvm: lun %u have no free pages available",
+		pr_err_ratelimited("gen: lun %u have no free pages available",
 								lun->vlun.id);
 		goto out;
 	}
@@ -350,18 +350,18 @@ out:
 	return blk;
 }
 
-static struct nvm_block *gennvm_get_blk(struct nvm_dev *dev,
+static struct nvm_block *gen_get_blk(struct nvm_dev *dev,
 				struct nvm_lun *vlun, unsigned long flags)
 {
 	struct nvm_block *blk;
 
 	spin_lock(&vlun->lock);
-	blk = gennvm_get_blk_unlocked(dev, vlun, flags);
+	blk = gen_get_blk_unlocked(dev, vlun, flags);
 	spin_unlock(&vlun->lock);
 	return blk;
 }
 
-static void gennvm_put_blk_unlocked(struct nvm_dev *dev, struct nvm_block *blk)
+static void gen_put_blk_unlocked(struct nvm_dev *dev, struct nvm_block *blk)
 {
 	struct nvm_lun *vlun = blk->lun;
 	struct gen_lun *lun = container_of(vlun, struct gen_lun, vlun);
@@ -377,35 +377,35 @@ static void gennvm_put_blk_unlocked(struct nvm_dev *dev, struct nvm_block *blk)
 		blk->state = NVM_BLK_ST_BAD;
 	} else {
 		WARN_ON_ONCE(1);
-		pr_err("gennvm: erroneous block type (%lu -> %u)\n",
+		pr_err("gen: erroneous block type (%lu -> %u)\n",
 							blk->id, blk->state);
 		list_move_tail(&blk->list, &lun->bb_list);
 	}
 }
 
-static void gennvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
+static void gen_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
 {
 	struct nvm_lun *vlun = blk->lun;
 
 	spin_lock(&vlun->lock);
-	gennvm_put_blk_unlocked(dev, blk);
+	gen_put_blk_unlocked(dev, blk);
 	spin_unlock(&vlun->lock);
 }
 
-static void gennvm_mark_blk(struct nvm_dev *dev, struct ppa_addr ppa, int type)
+static void gen_mark_blk(struct nvm_dev *dev, struct ppa_addr ppa, int type)
 {
-	struct gen_nvm *gn = dev->mp;
+	struct gen_dev *gn = dev->mp;
 	struct gen_lun *lun;
 	struct nvm_block *blk;
 
-	pr_debug("gennvm: ppa  (ch: %u lun: %u blk: %u pg: %u) -> %u\n",
+	pr_debug("gen: ppa  (ch: %u lun: %u blk: %u pg: %u) -> %u\n",
 			ppa.g.ch, ppa.g.lun, ppa.g.blk, ppa.g.pg, type);
 
 	if (unlikely(ppa.g.ch > dev->nr_chnls ||
 					ppa.g.lun > dev->luns_per_chnl ||
 					ppa.g.blk > dev->blks_per_lun)) {
 		WARN_ON_ONCE(1);
-		pr_err("gennvm: ppa broken (ch: %u > %u lun: %u > %u blk: %u > %u",
+		pr_err("gen: ppa broken (ch: %u > %u lun: %u > %u blk: %u > %u",
 				ppa.g.ch, dev->nr_chnls,
 				ppa.g.lun, dev->luns_per_chnl,
 				ppa.g.blk, dev->blks_per_lun);
@@ -420,9 +420,9 @@ static void gennvm_mark_blk(struct nvm_dev *dev, struct ppa_addr ppa, int type)
 }
 
 /*
- * mark block bad in gennvm. It is expected that the target recovers separately
+ * mark block bad in gen. It is expected that the target recovers separately
  */
-static void gennvm_mark_blk_bad(struct nvm_dev *dev, struct nvm_rq *rqd)
+static void gen_mark_blk_bad(struct nvm_dev *dev, struct nvm_rq *rqd)
 {
 	int bit = -1;
 	int max_secs = dev->ops->max_phys_sect;
@@ -432,25 +432,25 @@ static void gennvm_mark_blk_bad(struct nvm_dev *dev, struct nvm_rq *rqd)
 
 	/* look up blocks and mark them as bad */
 	if (rqd->nr_ppas == 1) {
-		gennvm_mark_blk(dev, rqd->ppa_addr, NVM_BLK_ST_BAD);
+		gen_mark_blk(dev, rqd->ppa_addr, NVM_BLK_ST_BAD);
 		return;
 	}
 
 	while ((bit = find_next_bit(comp_bits, max_secs, bit + 1)) < max_secs)
-		gennvm_mark_blk(dev, rqd->ppa_list[bit], NVM_BLK_ST_BAD);
+		gen_mark_blk(dev, rqd->ppa_list[bit], NVM_BLK_ST_BAD);
 }
 
-static void gennvm_end_io(struct nvm_rq *rqd)
+static void gen_end_io(struct nvm_rq *rqd)
 {
 	struct nvm_tgt_instance *ins = rqd->ins;
 
 	if (rqd->error == NVM_RSP_ERR_FAILWRITE)
-		gennvm_mark_blk_bad(rqd->dev, rqd);
+		gen_mark_blk_bad(rqd->dev, rqd);
 
 	ins->tt->end_io(rqd);
 }
 
-static int gennvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
+static int gen_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
 {
 	if (!dev->ops->submit_io)
 		return -ENODEV;
@@ -459,11 +459,11 @@ static int gennvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
 	nvm_generic_to_addr_mode(dev, rqd);
 
 	rqd->dev = dev;
-	rqd->end_io = gennvm_end_io;
+	rqd->end_io = gen_end_io;
 	return dev->ops->submit_io(dev, rqd);
 }
 
-static int gennvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk,
+static int gen_erase_blk(struct nvm_dev *dev, struct nvm_block *blk,
 							unsigned long flags)
 {
 	struct ppa_addr addr = block_to_ppa(dev, blk);
@@ -471,19 +471,19 @@ static int gennvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk,
 	return nvm_erase_ppa(dev, &addr, 1);
 }
 
-static int gennvm_reserve_lun(struct nvm_dev *dev, int lunid)
+static int gen_reserve_lun(struct nvm_dev *dev, int lunid)
 {
 	return test_and_set_bit(lunid, dev->lun_map);
 }
 
-static void gennvm_release_lun(struct nvm_dev *dev, int lunid)
+static void gen_release_lun(struct nvm_dev *dev, int lunid)
 {
 	WARN_ON(!test_and_clear_bit(lunid, dev->lun_map));
 }
 
-static struct nvm_lun *gennvm_get_lun(struct nvm_dev *dev, int lunid)
+static struct nvm_lun *gen_get_lun(struct nvm_dev *dev, int lunid)
 {
-	struct gen_nvm *gn = dev->mp;
+	struct gen_dev *gn = dev->mp;
 
 	if (unlikely(lunid >= dev->nr_luns))
 		return NULL;
@@ -491,14 +491,14 @@ static struct nvm_lun *gennvm_get_lun(struct nvm_dev *dev, int lunid)
 	return &gn->luns[lunid].vlun;
 }
 
-static void gennvm_lun_info_print(struct nvm_dev *dev)
+static void gen_lun_info_print(struct nvm_dev *dev)
 {
-	struct gen_nvm *gn = dev->mp;
+	struct gen_dev *gn = dev->mp;
 	struct gen_lun *lun;
 	unsigned int i;
 
 
-	gennvm_for_each_lun(gn, lun, i) {
+	gen_for_each_lun(gn, lun, i) {
 		spin_lock(&lun->vlun.lock);
 
 		pr_info("%s: lun%8u\t%u\n", dev->name, i,
@@ -508,45 +508,45 @@ static void gennvm_lun_info_print(struct nvm_dev *dev)
 	}
 }
 
-static struct nvmm_type gennvm = {
+static struct nvmm_type gen = {
 	.name			= "gennvm",
 	.version		= {0, 1, 0},
 
-	.register_mgr		= gennvm_register,
-	.unregister_mgr		= gennvm_unregister,
+	.register_mgr		= gen_register,
+	.unregister_mgr		= gen_unregister,
 
-	.get_blk_unlocked	= gennvm_get_blk_unlocked,
-	.put_blk_unlocked	= gennvm_put_blk_unlocked,
+	.get_blk_unlocked	= gen_get_blk_unlocked,
+	.put_blk_unlocked	= gen_put_blk_unlocked,
 
-	.get_blk		= gennvm_get_blk,
-	.put_blk		= gennvm_put_blk,
+	.get_blk		= gen_get_blk,
+	.put_blk		= gen_put_blk,
 
-	.submit_io		= gennvm_submit_io,
-	.erase_blk		= gennvm_erase_blk,
+	.submit_io		= gen_submit_io,
+	.erase_blk		= gen_erase_blk,
 
-	.mark_blk		= gennvm_mark_blk,
+	.mark_blk		= gen_mark_blk,
 
-	.get_lun		= gennvm_get_lun,
-	.reserve_lun		= gennvm_reserve_lun,
-	.release_lun		= gennvm_release_lun,
-	.lun_info_print		= gennvm_lun_info_print,
+	.get_lun		= gen_get_lun,
+	.reserve_lun		= gen_reserve_lun,
+	.release_lun		= gen_release_lun,
+	.lun_info_print		= gen_lun_info_print,
 
-	.get_area		= gennvm_get_area,
-	.put_area		= gennvm_put_area,
+	.get_area		= gen_get_area,
+	.put_area		= gen_put_area,
 
 };
 
-static int __init gennvm_module_init(void)
+static int __init gen_module_init(void)
 {
-	return nvm_register_mgr(&gennvm);
+	return nvm_register_mgr(&gen);
 }
 
-static void gennvm_module_exit(void)
+static void gen_module_exit(void)
 {
-	nvm_unregister_mgr(&gennvm);
+	nvm_unregister_mgr(&gen);
 }
 
-module_init(gennvm_module_init);
-module_exit(gennvm_module_exit);
+module_init(gen_module_init);
+module_exit(gen_module_exit);
 MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("Generic media manager for Open-Channel SSDs");
+MODULE_DESCRIPTION("General media manager for Open-Channel SSDs");
diff --git a/drivers/lightnvm/gennvm.h b/drivers/lightnvm/gennvm.h
index 04d7c23..bf06219 100644
--- a/drivers/lightnvm/gennvm.h
+++ b/drivers/lightnvm/gennvm.h
@@ -34,7 +34,7 @@ struct gen_lun {
 					 */
 };
 
-struct gen_nvm {
+struct gen_dev {
 	struct nvm_dev *dev;
 
 	int nr_luns;
@@ -42,12 +42,13 @@ struct gen_nvm {
 	struct list_head area_list;
 };
 
-struct gennvm_area {
+struct gen_area {
 	struct list_head list;
 	sector_t begin;
 	sector_t end;	/* end is excluded */
 };
-#define gennvm_for_each_lun(bm, lun, i) \
+
+#define gen_for_each_lun(bm, lun, i) \
 		for ((i) = 0, lun = &(bm)->luns[0]; \
 			(i) < (bm)->nr_luns; (i)++, lun = &(bm)->luns[(i)])
 
-- 
2.1.4


* [RFC PATCH 05/11] lightnvm: move target mgmt into media mgr
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
                   ` (3 preceding siblings ...)
  2016-06-29 14:41 ` [RFC PATCH 04/11] lightnvm: rename gennvm and update description Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 06/11] lightnvm: remove nested lock conflict with mm Matias Bjørling
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

To enable persistent block management to easily control creation and
removal of targets, we move target management into the media
manager. The LightNVM core continues to maintain which target types are
registered, while the media manager now keeps track of its initialized
targets.

Two new media manager callbacks are introduced: create_tgt and
remove_tgt. Note that remove_tgt returns 0 when a target is successfully
removed and 1 if the target was not found.
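
A sketch of how the core uses this return convention (simplified from
the ioctl path in this patch; the function name is illustrative):

static int nvm_remove_tgt_from_any(struct nvm_ioctl_remove *remove)
{
	struct nvm_dev *dev;
	int ret = 1;		/* 1 means "not found on this device" */

	list_for_each_entry(dev, &nvm_devices, devices) {
		ret = dev->mt->remove_tgt(dev, remove);
		if (!ret)	/* 0: target found and removed */
			break;
	}
	return ret;
}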

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/core.c   | 198 +++++++++-------------------------------------
 drivers/lightnvm/gennvm.c | 154 ++++++++++++++++++++++++++++++++++++
 drivers/lightnvm/gennvm.h |   3 +
 include/linux/lightnvm.h  |  10 +++
 4 files changed, 206 insertions(+), 159 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 0da196f..8afb04c 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -18,8 +18,6 @@
  *
  */
 
-#include <linux/blkdev.h>
-#include <linux/blk-mq.h>
 #include <linux/list.h>
 #include <linux/types.h>
 #include <linux/sem.h>
@@ -28,42 +26,37 @@
 #include <linux/miscdevice.h>
 #include <linux/lightnvm.h>
 #include <linux/sched/sysctl.h>
-#include <uapi/linux/lightnvm.h>
 
 static LIST_HEAD(nvm_tgt_types);
 static LIST_HEAD(nvm_mgrs);
 static LIST_HEAD(nvm_devices);
-static LIST_HEAD(nvm_targets);
 static DECLARE_RWSEM(nvm_lock);
 
-static struct nvm_target *nvm_find_target(const char *name)
+struct nvm_tgt_type *nvm_find_target_type(const char *name, int lock)
 {
-	struct nvm_target *tgt;
+	struct nvm_tgt_type *tmp, *tt = NULL;
 
-	list_for_each_entry(tgt, &nvm_targets, list)
-		if (!strcmp(name, tgt->disk->disk_name))
-			return tgt;
+	if (lock)
+		down_write(&nvm_lock);
 
-	return NULL;
-}
-
-static struct nvm_tgt_type *nvm_find_target_type(const char *name)
-{
-	struct nvm_tgt_type *tt;
-
-	list_for_each_entry(tt, &nvm_tgt_types, list)
-		if (!strcmp(name, tt->name))
-			return tt;
+	list_for_each_entry(tmp, &nvm_tgt_types, list)
+		if (!strcmp(name, tmp->name)) {
+			tt = tmp;
+			break;
+		}
 
-	return NULL;
+	if (lock)
+		up_write(&nvm_lock);
+	return tt;
 }
+EXPORT_SYMBOL(nvm_find_target_type);
 
 int nvm_register_tgt_type(struct nvm_tgt_type *tt)
 {
 	int ret = 0;
 
 	down_write(&nvm_lock);
-	if (nvm_find_target_type(tt->name))
+	if (nvm_find_target_type(tt->name, 0))
 		ret = -EEXIST;
 	else
 		list_add(&tt->list, &nvm_tgt_types);
@@ -605,42 +598,11 @@ err_fmtype:
 	return ret;
 }
 
-static void nvm_remove_target(struct nvm_target *t)
-{
-	struct nvm_tgt_type *tt = t->type;
-	struct gendisk *tdisk = t->disk;
-	struct request_queue *q = tdisk->queue;
-
-	lockdep_assert_held(&nvm_lock);
-
-	del_gendisk(tdisk);
-	blk_cleanup_queue(q);
-
-	if (tt->exit)
-		tt->exit(tdisk->private_data);
-
-	put_disk(tdisk);
-
-	list_del(&t->list);
-	kfree(t);
-}
-
 static void nvm_free_mgr(struct nvm_dev *dev)
 {
-	struct nvm_target *tgt, *tmp;
-
 	if (!dev->mt)
 		return;
 
-	down_write(&nvm_lock);
-	list_for_each_entry_safe(tgt, tmp, &nvm_targets, list) {
-		if (tgt->dev != dev)
-			continue;
-
-		nvm_remove_target(tgt);
-	}
-	up_write(&nvm_lock);
-
 	dev->mt->unregister_mgr(dev);
 	dev->mt = NULL;
 }
@@ -787,91 +749,6 @@ void nvm_unregister(char *disk_name)
 }
 EXPORT_SYMBOL(nvm_unregister);
 
-static const struct block_device_operations nvm_fops = {
-	.owner		= THIS_MODULE,
-};
-
-static int nvm_create_target(struct nvm_dev *dev,
-						struct nvm_ioctl_create *create)
-{
-	struct nvm_ioctl_create_simple *s = &create->conf.s;
-	struct request_queue *tqueue;
-	struct gendisk *tdisk;
-	struct nvm_tgt_type *tt;
-	struct nvm_target *t;
-	void *targetdata;
-
-	if (!dev->mt) {
-		pr_info("nvm: device has no media manager registered.\n");
-		return -ENODEV;
-	}
-
-	down_write(&nvm_lock);
-	tt = nvm_find_target_type(create->tgttype);
-	if (!tt) {
-		pr_err("nvm: target type %s not found\n", create->tgttype);
-		up_write(&nvm_lock);
-		return -EINVAL;
-	}
-
-	t = nvm_find_target(create->tgtname);
-	if (t) {
-		pr_err("nvm: target name already exists.\n");
-		up_write(&nvm_lock);
-		return -EINVAL;
-	}
-	up_write(&nvm_lock);
-
-	t = kmalloc(sizeof(struct nvm_target), GFP_KERNEL);
-	if (!t)
-		return -ENOMEM;
-
-	tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node);
-	if (!tqueue)
-		goto err_t;
-	blk_queue_make_request(tqueue, tt->make_rq);
-
-	tdisk = alloc_disk(0);
-	if (!tdisk)
-		goto err_queue;
-
-	sprintf(tdisk->disk_name, "%s", create->tgtname);
-	tdisk->flags = GENHD_FL_EXT_DEVT;
-	tdisk->major = 0;
-	tdisk->first_minor = 0;
-	tdisk->fops = &nvm_fops;
-	tdisk->queue = tqueue;
-
-	targetdata = tt->init(dev, tdisk, s->lun_begin, s->lun_end);
-	if (IS_ERR(targetdata))
-		goto err_init;
-
-	tdisk->private_data = targetdata;
-	tqueue->queuedata = targetdata;
-
-	blk_queue_max_hw_sectors(tqueue, 8 * dev->ops->max_phys_sect);
-
-	set_capacity(tdisk, tt->capacity(targetdata));
-	add_disk(tdisk);
-
-	t->type = tt;
-	t->disk = tdisk;
-	t->dev = dev;
-
-	down_write(&nvm_lock);
-	list_add_tail(&t->list, &nvm_targets);
-	up_write(&nvm_lock);
-
-	return 0;
-err_init:
-	put_disk(tdisk);
-err_queue:
-	blk_cleanup_queue(tqueue);
-err_t:
-	kfree(t);
-	return -ENOMEM;
-}
-
 static int __nvm_configure_create(struct nvm_ioctl_create *create)
 {
 	struct nvm_dev *dev;
@@ -880,11 +757,17 @@ static int __nvm_configure_create(struct nvm_ioctl_create *create)
 	down_write(&nvm_lock);
 	dev = nvm_find_nvm_dev(create->dev);
 	up_write(&nvm_lock);
+
 	if (!dev) {
 		pr_err("nvm: device not found\n");
 		return -EINVAL;
 	}
 
+	if (!dev->mt) {
+		pr_info("nvm: device has no media manager registered.\n");
+		return -ENODEV;
+	}
+
 	if (create->conf.type != NVM_CONFIG_TYPE_SIMPLE) {
 		pr_err("nvm: config type not valid\n");
 		return -EINVAL;
@@ -897,25 +780,7 @@ static int __nvm_configure_create(struct nvm_ioctl_create *create)
 		return -EINVAL;
 	}
 
-	return nvm_create_target(dev, create);
-}
-
-static int __nvm_configure_remove(struct nvm_ioctl_remove *remove)
-{
-	struct nvm_target *t;
-
-	down_write(&nvm_lock);
-	t = nvm_find_target(remove->tgtname);
-	if (!t) {
-		pr_err("nvm: target \"%s\" doesn't exist.\n", remove->tgtname);
-		up_write(&nvm_lock);
-		return -EINVAL;
-	}
-
-	nvm_remove_target(t);
-	up_write(&nvm_lock);
-
-	return 0;
+	return dev->mt->create_tgt(dev, create);
 }
 
 #ifdef CONFIG_NVM_DEBUG
@@ -950,8 +815,9 @@ static int nvm_configure_show(const char *val)
 static int nvm_configure_remove(const char *val)
 {
 	struct nvm_ioctl_remove remove;
+	struct nvm_dev *dev;
 	char opcode;
-	int ret;
+	int ret = 0;
 
 	ret = sscanf(val, "%c %256s", &opcode, remove.tgtname);
 	if (ret != 2) {
@@ -961,7 +827,13 @@ static int nvm_configure_remove(const char *val)
 
 	remove.flags = 0;
 
-	return __nvm_configure_remove(&remove);
+	list_for_each_entry(dev, &nvm_devices, devices) {
+		ret = dev->mt->remove_tgt(dev, &remove);
+		if (!ret)
+			break;
+	}
+
+	return ret;
 }
 
 static int nvm_configure_create(const char *val)
@@ -1158,6 +1030,8 @@ static long nvm_ioctl_dev_create(struct file *file, void __user *arg)
 static long nvm_ioctl_dev_remove(struct file *file, void __user *arg)
 {
 	struct nvm_ioctl_remove remove;
+	struct nvm_dev *dev;
+	int ret = 0;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -1172,7 +1046,13 @@ static long nvm_ioctl_dev_remove(struct file *file, void __user *arg)
 		return -EINVAL;
 	}
 
-	return __nvm_configure_remove(&remove);
+	list_for_each_entry(dev, &nvm_devices, devices) {
+		ret = dev->mt->remove_tgt(dev, &remove);
+		if (!ret)
+			break;
+	}
+
+	return ret;
 }
 
 static void nvm_setup_nvm_sb_info(struct nvm_sb_info *info)
diff --git a/drivers/lightnvm/gennvm.c b/drivers/lightnvm/gennvm.c
index 3d2762f..41760b2 100644
--- a/drivers/lightnvm/gennvm.c
+++ b/drivers/lightnvm/gennvm.c
@@ -20,6 +20,144 @@
 
 #include "gennvm.h"
 
+static struct nvm_target *gen_find_target(struct gen_dev *gn, const char *name)
+{
+	struct nvm_target *tgt;
+
+	list_for_each_entry(tgt, &gn->targets, list)
+		if (!strcmp(name, tgt->disk->disk_name))
+			return tgt;
+
+	return NULL;
+}
+
+static const struct block_device_operations gen_fops = {
+	.owner		= THIS_MODULE,
+};
+
+static int gen_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
+{
+	struct gen_dev *gn = dev->mp;
+	struct nvm_ioctl_create_simple *s = &create->conf.s;
+	struct request_queue *tqueue;
+	struct gendisk *tdisk;
+	struct nvm_tgt_type *tt;
+	struct nvm_target *t;
+	void *targetdata;
+
+	tt = nvm_find_target_type(create->tgttype, 1);
+	if (!tt) {
+		pr_err("nvm: target type %s not found\n", create->tgttype);
+		return -EINVAL;
+	}
+
+	mutex_lock(&gn->lock);
+	t = gen_find_target(gn, create->tgtname);
+	if (t) {
+		pr_err("nvm: target name already exists.\n");
+		mutex_unlock(&gn->lock);
+		return -EINVAL;
+	}
+	mutex_unlock(&gn->lock);
+
+	t = kmalloc(sizeof(struct nvm_target), GFP_KERNEL);
+	if (!t)
+		return -ENOMEM;
+
+	tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node);
+	if (!tqueue)
+		goto err_t;
+	blk_queue_make_request(tqueue, tt->make_rq);
+
+	tdisk = alloc_disk(0);
+	if (!tdisk)
+		goto err_queue;
+
+	sprintf(tdisk->disk_name, "%s", create->tgtname);
+	tdisk->flags = GENHD_FL_EXT_DEVT;
+	tdisk->major = 0;
+	tdisk->first_minor = 0;
+	tdisk->fops = &gen_fops;
+	tdisk->queue = tqueue;
+
+	targetdata = tt->init(dev, tdisk, s->lun_begin, s->lun_end);
+	if (IS_ERR(targetdata))
+		goto err_init;
+
+	tdisk->private_data = targetdata;
+	tqueue->queuedata = targetdata;
+
+	blk_queue_max_hw_sectors(tqueue, 8 * dev->ops->max_phys_sect);
+
+	set_capacity(tdisk, tt->capacity(targetdata));
+	add_disk(tdisk);
+
+	t->type = tt;
+	t->disk = tdisk;
+	t->dev = dev;
+
+	mutex_lock(&gn->lock);
+	list_add_tail(&t->list, &gn->targets);
+	mutex_unlock(&gn->lock);
+
+	return 0;
+err_init:
+	put_disk(tdisk);
+err_queue:
+	blk_cleanup_queue(tqueue);
+err_t:
+	kfree(t);
+	return -ENOMEM;
+}
+
+static void __gen_remove_target(struct nvm_target *t)
+{
+	struct nvm_tgt_type *tt = t->type;
+	struct gendisk *tdisk = t->disk;
+	struct request_queue *q = tdisk->queue;
+
+	del_gendisk(tdisk);
+	blk_cleanup_queue(q);
+
+	if (tt->exit)
+		tt->exit(tdisk->private_data);
+
+	put_disk(tdisk);
+
+	list_del(&t->list);
+	kfree(t);
+}
+
+/**
+ * gen_remove_tgt - Removes a target from the media manager
+ * @dev:	device
+ * @remove:	ioctl structure with target name to remove.
+ *
+ * Returns:
+ * 0: on success
+ * 1: on not found
+ * <0: on error
+ */
+static int gen_remove_tgt(struct nvm_dev *dev, struct nvm_ioctl_remove *remove)
+{
+	struct gen_dev *gn = dev->mp;
+	struct nvm_target *t;
+
+	if (!gn)
+		return 1;
+
+	mutex_lock(&gn->lock);
+	t = gen_find_target(gn, remove->tgtname);
+	if (!t) {
+		mutex_unlock(&gn->lock);
+		return 1;
+	}
+	__gen_remove_target(t);
+	mutex_unlock(&gn->lock);
+
+	return 0;
+}
+
 static int gen_get_area(struct nvm_dev *dev, sector_t *lba, sector_t len)
 {
 	struct gen_dev *gn = dev->mp;
@@ -295,6 +433,8 @@ static int gen_register(struct nvm_dev *dev)
 	gn->dev = dev;
 	gn->nr_luns = dev->nr_luns;
 	INIT_LIST_HEAD(&gn->area_list);
+	mutex_init(&gn->lock);
+	INIT_LIST_HEAD(&gn->targets);
 	dev->mp = gn;
 
 	ret = gen_luns_init(dev, gn);
@@ -318,6 +458,17 @@ err:
 
 static void gen_unregister(struct nvm_dev *dev)
 {
+	struct gen_dev *gn = dev->mp;
+	struct nvm_target *t, *tmp;
+
+	mutex_lock(&gn->lock);
+	list_for_each_entry_safe(t, tmp, &gn->targets, list) {
+		if (t->dev != dev)
+			continue;
+		__gen_remove_target(t);
+	}
+	mutex_unlock(&gn->lock);
+
 	gen_free(dev);
 	module_put(THIS_MODULE);
 }
@@ -515,6 +666,9 @@ static struct nvmm_type gen = {
 	.register_mgr		= gen_register,
 	.unregister_mgr		= gen_unregister,
 
+	.create_tgt		= gen_create_tgt,
+	.remove_tgt		= gen_remove_tgt,
+
 	.get_blk_unlocked	= gen_get_blk_unlocked,
 	.put_blk_unlocked	= gen_put_blk_unlocked,
 
diff --git a/drivers/lightnvm/gennvm.h b/drivers/lightnvm/gennvm.h
index bf06219..8ecfa81 100644
--- a/drivers/lightnvm/gennvm.h
+++ b/drivers/lightnvm/gennvm.h
@@ -40,6 +40,9 @@ struct gen_dev {
 	int nr_luns;
 	struct gen_lun *luns;
 	struct list_head area_list;
+
+	struct mutex lock;
+	struct list_head targets;
 };
 
 struct gen_area {
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 8b51d57..d619f6d 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -1,7 +1,9 @@
 #ifndef NVM_H
 #define NVM_H
 
+#include <linux/blkdev.h>
 #include <linux/types.h>
+#include <uapi/linux/lightnvm.h>
 
 enum {
 	NVM_IO_OK = 0,
@@ -447,6 +449,8 @@ struct nvm_tgt_type {
 	struct list_head list;
 };
 
+extern struct nvm_tgt_type *nvm_find_target_type(const char *, int);
+
 extern int nvm_register_tgt_type(struct nvm_tgt_type *);
 extern void nvm_unregister_tgt_type(struct nvm_tgt_type *);
 
@@ -455,6 +459,9 @@ extern void nvm_dev_dma_free(struct nvm_dev *, void *, dma_addr_t);
 
 typedef int (nvmm_register_fn)(struct nvm_dev *);
 typedef void (nvmm_unregister_fn)(struct nvm_dev *);
+
+typedef int (nvmm_create_tgt_fn)(struct nvm_dev *, struct nvm_ioctl_create *);
+typedef int (nvmm_remove_tgt_fn)(struct nvm_dev *, struct nvm_ioctl_remove *);
 typedef struct nvm_block *(nvmm_get_blk_fn)(struct nvm_dev *,
 					      struct nvm_lun *, unsigned long);
 typedef void (nvmm_put_blk_fn)(struct nvm_dev *, struct nvm_block *);
@@ -480,6 +487,9 @@ struct nvmm_type {
 	nvmm_register_fn *register_mgr;
 	nvmm_unregister_fn *unregister_mgr;
 
+	nvmm_create_tgt_fn *create_tgt;
+	nvmm_remove_tgt_fn *remove_tgt;
+
 	/* Block administration callbacks */
 	nvmm_get_blk_fn *get_blk_unlocked;
 	nvmm_put_blk_fn *put_blk_unlocked;
-- 
2.1.4


* [RFC PATCH 06/11] lightnvm: remove nested lock conflict with mm
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
                   ` (4 preceding siblings ...)
  2016-06-29 14:41 ` [RFC PATCH 05/11] lightnvm: move target mgmt into media mgr Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 07/11] lightnvm: remove unused lists from struct rrpc_block Matias Bjørling
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

If a media manager tries to initialize its targets upon media manager
initialization, it needs to know which target types are available in
LightNVM. The lists of available media managers and target types
currently share the same lock.

Therefore, on initialization, nvm_lock is already taken by the LightNVM
core, which leads to a deadlock when the media manager then enumerates
the target types.

Add a separate lock for target types to resolve this conflict.
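
The problematic path, as a rough sketch (exact call sites simplified):

/*
 * Before this patch:
 *
 *   core holds nvm_lock during media manager initialization
 *     -> media manager brings up its targets
 *        -> nvm_find_target_type(name, 1) takes nvm_lock again -> deadlock
 *
 * After this patch the target-type list has its own rwsem, so the
 * inner lookup no longer contends with nvm_lock:
 */
static DECLARE_RWSEM(nvm_tgtt_lock);	/* protects nvm_tgt_types only */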

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/core.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 8afb04c..04469e0 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -28,6 +28,7 @@
 #include <linux/sched/sysctl.h>
 
 static LIST_HEAD(nvm_tgt_types);
+static DECLARE_RWSEM(nvm_tgtt_lock);
 static LIST_HEAD(nvm_mgrs);
 static LIST_HEAD(nvm_devices);
 static DECLARE_RWSEM(nvm_lock);
@@ -37,7 +38,7 @@ struct nvm_tgt_type *nvm_find_target_type(const char *name, int lock)
 	struct nvm_tgt_type *tmp, *tt = NULL;
 
 	if (lock)
-		down_write(&nvm_lock);
+		down_write(&nvm_tgtt_lock);
 
 	list_for_each_entry(tmp, &nvm_tgt_types, list)
 		if (!strcmp(name, tmp->name)) {
@@ -46,7 +47,7 @@ struct nvm_tgt_type *nvm_find_target_type(const char *name, int lock)
 		}
 
 	if (lock)
-		up_write(&nvm_lock);
+		up_write(&nvm_tgtt_lock);
 	return tt;
 }
 EXPORT_SYMBOL(nvm_find_target_type);
@@ -55,12 +56,12 @@ int nvm_register_tgt_type(struct nvm_tgt_type *tt)
 {
 	int ret = 0;
 
-	down_write(&nvm_lock);
+	down_write(&nvm_tgtt_lock);
 	if (nvm_find_target_type(tt->name, 0))
 		ret = -EEXIST;
 	else
 		list_add(&tt->list, &nvm_tgt_types);
-	up_write(&nvm_lock);
+	up_write(&nvm_tgtt_lock);
 
 	return ret;
 }
-- 
2.1.4


* [RFC PATCH 07/11] lightnvm: remove unused lists from struct rrpc_block
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
                   ` (5 preceding siblings ...)
  2016-06-29 14:41 ` [RFC PATCH 06/11] lightnvm: remove nested lock conflict with mm Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 08/11] lightnvm: remove _unlocked variant of [get/put]_blk Matias Bjørling
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

The ->list, ->open_list, and ->closed_list lists were previously used
for statistics. That usage has since been removed, so the lists can
safely be dropped as well.

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/rrpc.c | 9 ---------
 drivers/lightnvm/rrpc.h | 8 --------
 2 files changed, 17 deletions(-)

diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index c3a6f34..10ed22b 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -205,7 +205,6 @@ static struct rrpc_block *rrpc_get_blk(struct rrpc *rrpc, struct rrpc_lun *rlun,
 	}
 
 	rblk = rrpc_get_rblk(rlun, blk->id);
-	list_add_tail(&rblk->list, &rlun->open_list);
 	spin_unlock(&lun->lock);
 
 	blk->priv = rblk;
@@ -224,7 +223,6 @@ static void rrpc_put_blk(struct rrpc *rrpc, struct rrpc_block *rblk)
 
 	spin_lock(&lun->lock);
 	nvm_put_blk_unlocked(rrpc->dev, rblk->parent);
-	list_del(&rblk->list);
 	spin_unlock(&lun->lock);
 }
 
@@ -511,16 +509,11 @@ static void rrpc_gc_queue(struct work_struct *work)
 	struct rrpc *rrpc = gcb->rrpc;
 	struct rrpc_block *rblk = gcb->rblk;
 	struct rrpc_lun *rlun = rblk->rlun;
-	struct nvm_lun *lun = rblk->parent->lun;
 
 	spin_lock(&rlun->lock);
 	list_add_tail(&rblk->prio, &rlun->prio_list);
 	spin_unlock(&rlun->lock);
 
-	spin_lock(&lun->lock);
-	list_move_tail(&rblk->list, &rlun->closed_list);
-	spin_unlock(&lun->lock);
-
 	mempool_free(gcb, rrpc->gcb_pool);
 	pr_debug("nvm: block '%lu' is full, allow GC (sched)\n",
 							rblk->parent->id);
@@ -1194,8 +1187,6 @@ static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end)
 
 		rlun->rrpc = rrpc;
 		INIT_LIST_HEAD(&rlun->prio_list);
-		INIT_LIST_HEAD(&rlun->open_list);
-		INIT_LIST_HEAD(&rlun->closed_list);
 
 		INIT_WORK(&rlun->ws_gc, rrpc_lun_gc);
 		spin_lock_init(&rlun->lock);
diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h
index 5797343..448e39a 100644
--- a/drivers/lightnvm/rrpc.h
+++ b/drivers/lightnvm/rrpc.h
@@ -56,7 +56,6 @@ struct rrpc_block {
 	struct nvm_block *parent;
 	struct rrpc_lun *rlun;
 	struct list_head prio;
-	struct list_head list;
 
 #define MAX_INVALID_PAGES_STORAGE 8
 	/* Bitmap for invalid page intries */
@@ -77,13 +76,6 @@ struct rrpc_lun {
 	struct rrpc_block *blocks;	/* Reference to block allocation */
 
 	struct list_head prio_list;	/* Blocks that may be GC'ed */
-	struct list_head open_list;	/* In-use open blocks. These are blocks
-					 * that can be both written to and read
-					 * from
-					 */
-	struct list_head closed_list;	/* In-use closed blocks. These are
-					 * blocks that can _only_ be read from
-					 */
 
 	struct work_struct ws_gc;
 
-- 
2.1.4


* [RFC PATCH 08/11] lightnvm: remove _unlocked variant of [get/put]_blk
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
                   ` (6 preceding siblings ...)
  2016-06-29 14:41 ` [RFC PATCH 07/11] lightnvm: remove unused lists from struct rrpc_block Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 09/11] lightnvm: fix lun offset calculation for mark blk Matias Bjørling
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

The [get/put]_blk API enables targets to take ownership of blocks at
runtime. This ownership is currently not recorded on disk and is
therefore lost on power failure. To restore it, [get/put]_blk must
persist its metadata, which requires the media manager to control the
lun lock itself, so it can drop the lock while updating the on-disk
metadata. Fortunately, the _unlocked variants can simply be removed,
which allows the lock to move into the [get/put]_blk functions.
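
On the caller side this removes the explicit lun locking around block
allocation. A minimal sketch of a target after the change (mirroring
the rrpc conversion in this patch; tgt_alloc_blk and tgt_free_blk are
illustrative names):

static struct nvm_block *tgt_alloc_blk(struct nvm_dev *dev,
				       struct nvm_lun *lun)
{
	/* lun->lock is now taken and released inside the media manager */
	return nvm_get_blk(dev, lun, 0);
}

static void tgt_free_blk(struct nvm_dev *dev, struct nvm_block *blk)
{
	nvm_put_blk(dev, blk);
}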

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/core.c   | 14 --------------
 drivers/lightnvm/gennvm.c | 32 ++++----------------------------
 drivers/lightnvm/rrpc.c   | 14 ++------------
 include/linux/lightnvm.h  |  6 ------
 4 files changed, 6 insertions(+), 60 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 04469e0..ddc8098 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -176,20 +176,6 @@ static struct nvm_dev *nvm_find_nvm_dev(const char *name)
 	return NULL;
 }
 
-struct nvm_block *nvm_get_blk_unlocked(struct nvm_dev *dev, struct nvm_lun *lun,
-							unsigned long flags)
-{
-	return dev->mt->get_blk_unlocked(dev, lun, flags);
-}
-EXPORT_SYMBOL(nvm_get_blk_unlocked);
-
-/* Assumes that all valid pages have already been moved on release to bm */
-void nvm_put_blk_unlocked(struct nvm_dev *dev, struct nvm_block *blk)
-{
-	return dev->mt->put_blk_unlocked(dev, blk);
-}
-EXPORT_SYMBOL(nvm_put_blk_unlocked);
-
 struct nvm_block *nvm_get_blk(struct nvm_dev *dev, struct nvm_lun *lun,
 							unsigned long flags)
 {
diff --git a/drivers/lightnvm/gennvm.c b/drivers/lightnvm/gennvm.c
index 41760b2..c65fb67 100644
--- a/drivers/lightnvm/gennvm.c
+++ b/drivers/lightnvm/gennvm.c
@@ -473,15 +473,14 @@ static void gen_unregister(struct nvm_dev *dev)
 	module_put(THIS_MODULE);
 }
 
-static struct nvm_block *gen_get_blk_unlocked(struct nvm_dev *dev,
+static struct nvm_block *gen_get_blk(struct nvm_dev *dev,
 				struct nvm_lun *vlun, unsigned long flags)
 {
 	struct gen_lun *lun = container_of(vlun, struct gen_lun, vlun);
 	struct nvm_block *blk = NULL;
 	int is_gc = flags & NVM_IOTYPE_GC;
 
-	assert_spin_locked(&vlun->lock);
-
+	spin_lock(&vlun->lock);
 	if (list_empty(&lun->free_list)) {
 		pr_err_ratelimited("gen: lun %u have no free pages available",
 								lun->vlun.id);
@@ -496,29 +495,17 @@ static struct nvm_block *gen_get_blk_unlocked(struct nvm_dev *dev,
 	list_move_tail(&blk->list, &lun->used_list);
 	blk->state = NVM_BLK_ST_TGT;
 	lun->vlun.nr_free_blocks--;
-
 out:
-	return blk;
-}
-
-static struct nvm_block *gen_get_blk(struct nvm_dev *dev,
-				struct nvm_lun *vlun, unsigned long flags)
-{
-	struct nvm_block *blk;
-
-	spin_lock(&vlun->lock);
-	blk = gen_get_blk_unlocked(dev, vlun, flags);
 	spin_unlock(&vlun->lock);
 	return blk;
 }
 
-static void gen_put_blk_unlocked(struct nvm_dev *dev, struct nvm_block *blk)
+static void gen_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
 {
 	struct nvm_lun *vlun = blk->lun;
 	struct gen_lun *lun = container_of(vlun, struct gen_lun, vlun);
 
-	assert_spin_locked(&vlun->lock);
-
+	spin_lock(&vlun->lock);
 	if (blk->state & NVM_BLK_ST_TGT) {
 		list_move_tail(&blk->list, &lun->free_list);
 		lun->vlun.nr_free_blocks++;
@@ -532,14 +519,6 @@ static void gen_put_blk_unlocked(struct nvm_dev *dev, struct nvm_block *blk)
 							blk->id, blk->state);
 		list_move_tail(&blk->list, &lun->bb_list);
 	}
-}
-
-static void gen_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
-{
-	struct nvm_lun *vlun = blk->lun;
-
-	spin_lock(&vlun->lock);
-	gen_put_blk_unlocked(dev, blk);
 	spin_unlock(&vlun->lock);
 }
 
@@ -669,9 +648,6 @@ static struct nvmm_type gen = {
 	.create_tgt		= gen_create_tgt,
 	.remove_tgt		= gen_remove_tgt,
 
-	.get_blk_unlocked	= gen_get_blk_unlocked,
-	.put_blk_unlocked	= gen_put_blk_unlocked,
-
 	.get_blk		= gen_get_blk,
 	.put_blk		= gen_put_blk,
 
diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index 10ed22b..fa8d5be 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -192,21 +192,16 @@ static void rrpc_set_lun_cur(struct rrpc_lun *rlun, struct rrpc_block *rblk)
 static struct rrpc_block *rrpc_get_blk(struct rrpc *rrpc, struct rrpc_lun *rlun,
 							unsigned long flags)
 {
-	struct nvm_lun *lun = rlun->parent;
 	struct nvm_block *blk;
 	struct rrpc_block *rblk;
 
-	spin_lock(&lun->lock);
-	blk = nvm_get_blk_unlocked(rrpc->dev, rlun->parent, flags);
+	blk = nvm_get_blk(rrpc->dev, rlun->parent, flags);
 	if (!blk) {
 		pr_err("nvm: rrpc: cannot get new block from media manager\n");
-		spin_unlock(&lun->lock);
 		return NULL;
 	}
 
 	rblk = rrpc_get_rblk(rlun, blk->id);
-	spin_unlock(&lun->lock);
-
 	blk->priv = rblk;
 	bitmap_zero(rblk->invalid_pages, rrpc->dev->sec_per_blk);
 	rblk->next_page = 0;
@@ -218,12 +213,7 @@ static struct rrpc_block *rrpc_get_blk(struct rrpc *rrpc, struct rrpc_lun *rlun,
 
 static void rrpc_put_blk(struct rrpc *rrpc, struct rrpc_block *rblk)
 {
-	struct rrpc_lun *rlun = rblk->rlun;
-	struct nvm_lun *lun = rlun->parent;
-
-	spin_lock(&lun->lock);
-	nvm_put_blk_unlocked(rrpc->dev, rblk->parent);
-	spin_unlock(&lun->lock);
+	nvm_put_blk(rrpc->dev, rblk->parent);
 }
 
 static void rrpc_put_blks(struct rrpc *rrpc)
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index d619f6d..e9836cf 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -491,8 +491,6 @@ struct nvmm_type {
 	nvmm_remove_tgt_fn *remove_tgt;
 
 	/* Block administration callbacks */
-	nvmm_get_blk_fn *get_blk_unlocked;
-	nvmm_put_blk_fn *put_blk_unlocked;
 	nvmm_get_blk_fn *get_blk;
 	nvmm_put_blk_fn *put_blk;
 	nvmm_open_blk_fn *open_blk;
@@ -522,10 +520,6 @@ struct nvmm_type {
 extern int nvm_register_mgr(struct nvmm_type *);
 extern void nvm_unregister_mgr(struct nvmm_type *);
 
-extern struct nvm_block *nvm_get_blk_unlocked(struct nvm_dev *,
-					struct nvm_lun *, unsigned long);
-extern void nvm_put_blk_unlocked(struct nvm_dev *, struct nvm_block *);
-
 extern struct nvm_block *nvm_get_blk(struct nvm_dev *, struct nvm_lun *,
 								unsigned long);
 extern void nvm_put_blk(struct nvm_dev *, struct nvm_block *);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 12+ messages in thread
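
With the _unlocked variants gone, the lun lock is taken inside the media
manager's get/put paths and targets no longer lock the lun themselves. Below
is a minimal sketch of the resulting gen_get_blk(), mirroring the gen_put_blk()
shown in the hunk above (list and field names follow this series' gennvm code;
the body is abridged and not the exact upstream implementation):

  static struct nvm_block *gen_get_blk(struct nvm_dev *dev,
                                       struct nvm_lun *vlun,
                                       unsigned long flags)
  {
          struct gen_lun *lun = container_of(vlun, struct gen_lun, vlun);
          struct nvm_block *blk = NULL;

          /* flags unused in this sketch */
          spin_lock(&vlun->lock);
          if (!list_empty(&lun->free_list)) {
                  blk = list_first_entry(&lun->free_list,
                                         struct nvm_block, list);
                  list_move_tail(&blk->list, &lun->used_list);
                  blk->state = NVM_BLK_ST_TGT;
                  lun->vlun.nr_free_blocks--;
          }
          spin_unlock(&vlun->lock);

          return blk;
  }

A target such as rrpc then calls nvm_get_blk()/nvm_put_blk() directly, as in
the rrpc hunks above, without holding the parent lun lock around the call.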

* [RFC PATCH 09/11] lightnvm: fix lun offset calculation for mark blk
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
                   ` (7 preceding siblings ...)
  2016-06-29 14:41 ` [RFC PATCH 08/11] lightnvm: remove _unlocked variant of [get/put]_blk Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 10/11] lightnvm: make ppa_list const in nvm_set_rqd_list Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 11/11] lightnvm: make __nvm_submit_ppa static Matias Bjørling
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

The gen_mark_blk_bad function marks the wrong block when a block is on
a different channel. Fix the index calculation, so that it updates the
correct block.

Reported-by: Javier Gonzalez <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/gennvm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/lightnvm/gennvm.c b/drivers/lightnvm/gennvm.c
index c65fb67..b74174c 100644
--- a/drivers/lightnvm/gennvm.c
+++ b/drivers/lightnvm/gennvm.c
@@ -542,7 +542,7 @@ static void gen_mark_blk(struct nvm_dev *dev, struct ppa_addr ppa, int type)
 		return;
 	}
 
-	lun = &gn->luns[ppa.g.lun * ppa.g.ch];
+	lun = &gn->luns[(dev->luns_per_chnl * ppa.g.ch) + ppa.g.lun];
 	blk = &lun->vlun.blocks[ppa.g.blk];
 
 	/* will be moved to bb list on put_blk from target */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 12+ messages in thread
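
The fix treats gn->luns[] as a flat array laid out channel by channel, with
luns_per_chnl entries per channel. A small sketch of the mapping follows;
gen_ppa_to_lun_idx() is a hypothetical helper name used only to illustrate the
corrected index expression, not a function in the tree:

  /*
   * Flat index of (channel, lun) in gn->luns[]:
   *
   *      idx = ch * luns_per_chnl + lun
   *
   * The old expression, ppa.g.lun * ppa.g.ch, evaluates to 0 for every
   * lun on channel 0 and lands on an unrelated slot on other channels.
   */
  static inline int gen_ppa_to_lun_idx(struct nvm_dev *dev,
                                       struct ppa_addr ppa)
  {
          return (dev->luns_per_chnl * ppa.g.ch) + ppa.g.lun;
  }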

* [RFC PATCH 10/11] lightnvm: make ppa_list const in nvm_set_rqd_list

  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
                   ` (8 preceding siblings ...)
  2016-06-29 14:41 ` [RFC PATCH 09/11] lightnvm: fix lun offset calculation for mark blk Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  2016-06-29 14:41 ` [RFC PATCH 11/11] lightnvm: make __nvm_submit_ppa static Matias Bjørling
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

The ppa list passed by reference to nvm_set_rqd_ppalist() is updated when
multiple planes are available. In that case, the plane field of each PPA
in the caller's list is overwritten while the device-side PPA list is
created, so the caller cannot rely on the list being unmodified after the
call. Make the list const and update a local copy instead.

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/core.c  | 8 +++++---
 include/linux/lightnvm.h | 2 +-
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index ddc8098..00b64f7 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -237,9 +237,10 @@ void nvm_generic_to_addr_mode(struct nvm_dev *dev, struct nvm_rq *rqd)
 EXPORT_SYMBOL(nvm_generic_to_addr_mode);
 
 int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd,
-				struct ppa_addr *ppas, int nr_ppas, int vblk)
+			const struct ppa_addr *ppas, int nr_ppas, int vblk)
 {
 	int i, plane_cnt, pl_idx;
+	struct ppa_addr ppa;
 
 	if ((!vblk || dev->plane_mode == NVM_PLANE_SINGLE) && nr_ppas == 1) {
 		rqd->nr_ppas = nr_ppas;
@@ -264,8 +265,9 @@ int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd,
 
 		for (i = 0; i < nr_ppas; i++) {
 			for (pl_idx = 0; pl_idx < plane_cnt; pl_idx++) {
-				ppas[i].g.pl = pl_idx;
-				rqd->ppa_list[(pl_idx * nr_ppas) + i] = ppas[i];
+				ppa = ppas[i];
+				ppa.g.pl = pl_idx;
+				rqd->ppa_list[(pl_idx * nr_ppas) + i] = ppa;
 			}
 		}
 	}
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index e9836cf..ba78b83 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -534,7 +534,7 @@ extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
 extern void nvm_generic_to_addr_mode(struct nvm_dev *, struct nvm_rq *);
 extern void nvm_addr_to_generic_mode(struct nvm_dev *, struct nvm_rq *);
 extern int nvm_set_rqd_ppalist(struct nvm_dev *, struct nvm_rq *,
-						struct ppa_addr *, int, int);
+					const struct ppa_addr *, int, int);
 extern void nvm_free_rqd_ppalist(struct nvm_dev *, struct nvm_rq *);
 extern int nvm_erase_ppa(struct nvm_dev *, struct ppa_addr *, int);
 extern int nvm_erase_blk(struct nvm_dev *, struct nvm_block *);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 12+ messages in thread
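
After the change, nvm_set_rqd_ppalist() modifies a stack copy of each entry
rather than the caller's array. A minimal sketch of the copy-then-assign
pattern from the hunk above (plane_cnt is derived from dev->plane_mode in the
surrounding function; the single-PPA fast path and the ppa_list allocation and
error handling are omitted):

  int i, pl_idx;
  int plane_cnt = dev->plane_mode;
  struct ppa_addr ppa;

  for (i = 0; i < nr_ppas; i++) {
          for (pl_idx = 0; pl_idx < plane_cnt; pl_idx++) {
                  ppa = ppas[i];          /* local copy of the const entry */
                  ppa.g.pl = pl_idx;      /* plane set only on the copy */
                  rqd->ppa_list[(pl_idx * nr_ppas) + i] = ppa;
          }
  }

The caller's ppas[] is therefore left exactly as it was passed in, which is
what the const qualifier in the prototype now documents.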

* [RFC PATCH 11/11] lightnvm: make __nvm_submit_ppa static
  2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
                   ` (9 preceding siblings ...)
  2016-06-29 14:41 ` [RFC PATCH 10/11] lightnvm: make ppa_list const in nvm_set_rqd_list Matias Bjørling
@ 2016-06-29 14:41 ` Matias Bjørling
  10 siblings, 0 replies; 12+ messages in thread
From: Matias Bjørling @ 2016-06-29 14:41 UTC (permalink / raw)
  To: linux-block, linux-kernel; +Cc: Matias Bjørling

The __nvm_submit_ppa() function is not used outside the lightnvm core, so
give it static linkage.

Signed-off-by: Matias Bjørling <m@bjorling.me>
---
 drivers/lightnvm/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 00b64f7..9ebd2cf 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -325,7 +325,7 @@ static void nvm_end_io_sync(struct nvm_rq *rqd)
 	complete(waiting);
 }
 
-int __nvm_submit_ppa(struct nvm_dev *dev, struct nvm_rq *rqd, int opcode,
+static int __nvm_submit_ppa(struct nvm_dev *dev, struct nvm_rq *rqd, int opcode,
 						int flags, void *buf, int len)
 {
 	DECLARE_COMPLETION_ONSTACK(wait);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2016-06-29 14:45 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-29 14:41 [RFC PATCH 00/11] Small fixes for LightNVM Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 01/11] lightnvm: remove checkpatch warning for unsigned ints Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 02/11] lightnvm: fix checkpatch terse errors Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 03/11] lightnvm: remove open/close statistics for gennvm Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 04/11] lightnvm: rename gennvm and update description Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 05/11] lightnvm: move target mgmt into media mgr Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 06/11] lightnvm: remove nested lock conflict with mm Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 07/11] lightnvm: remove unused lists from struct rrpc_block Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 08/11] lightnvm: remove _unlocked variant of [get/put]_blk Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 09/11] lightnvm: fix lun offset calculation for mark blk Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 10/11] lightnvm: make ppa_list const in nvm_set_rqd_list Matias Bjørling
2016-06-29 14:41 ` [RFC PATCH 11/11] lightnvm: make __nvm_submit_ppa static Matias Bjørling

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).