linux-kernel.vger.kernel.org archive mirror
* [PATCH V2] UBI: add ubi_lnum_purge function to clear work queue for a lnum
@ 2012-05-15  7:44 Joel Reardon
  2012-05-15 11:40 ` Artem Bityutskiy
  2012-05-20 11:10 ` [PATCH V3] UBI: modify ubi_wl_flush " Joel Reardon
  0 siblings, 2 replies; 8+ messages in thread
From: Joel Reardon @ 2012-05-15  7:44 UTC (permalink / raw)
  To: Artem Bityutskiy; +Cc: linux-mtd, linux-kernel

This is the second part of a patch to allow UBI to force the erasure of
particular logical eraseblock numbers. In this patch, a new function,
ubi_lnum_purge, is added that allows the caller to synchronously erase all
unmapped erase blocks whose LEB number corresponds to the parameter. This
requires a previous patch that stores the LEB number in struct ubi_work.

This was tested by disabling the call to do_work in the UBI background
thread, so that queued work remains until explicitly flushed. UBIFS was
changed to call ubifs_leb_change 50 times for three different LEBs. The new
function was then called to clear the queue for the three different LEB
numbers one at a time. The work queue was dumped each time and the selective
removal of the particular LEB numbers was observed.

Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>
---
 drivers/mtd/ubi/kapi.c  |   31 +++++++++++++++++++++++++++++++
 drivers/mtd/ubi/ubi.h   |    1 +
 drivers/mtd/ubi/wl.c    |   42 +++++++++++++++++++++++++++++++++++++++++-
 include/linux/mtd/ubi.h |    1 +
 4 files changed, 74 insertions(+), 1 deletions(-)

diff --git a/drivers/mtd/ubi/kapi.c b/drivers/mtd/ubi/kapi.c
index 33ede23..a236522 100644
--- a/drivers/mtd/ubi/kapi.c
+++ b/drivers/mtd/ubi/kapi.c
@@ -704,6 +704,37 @@ int ubi_sync(int ubi_num)
 }
 EXPORT_SYMBOL_GPL(ubi_sync);

+/**
+ * ubi_lnum_purge - synchronously erase unmapped PEBs by LEB number.
+ * @ubi_num: UBI device to erase PEBs
+ * @lnum: the LEB number to erase old unmapped PEBs.
+ *
+ * This function is designed to ensure that the contents of old, unmapped
+ * LEBs are securely erased without having to flush the entire work queue.
+ * Simply erasing the block at the time of unmapping is insufficient, as the
+ * wear-levelling subsystem may have already moved the contents. This function
+ * walks the list of erase blocks awaiting erasure and performs an immediate,
+ * synchronous erasure of any erase block that has held data for this
+ * particular @lnum. This may include eraseblocks that held older versions of
+ * the same @lnum.
+ * Returns zero in case of success and a negative error code in case of
+ * failure.
+ */
+int ubi_lnum_purge(int ubi_num, int lnum)
+{
+	int err;
+	struct ubi_device *ubi;
+
+	ubi = ubi_get_device(ubi_num);
+	if (!ubi)
+		return -ENODEV;
+
+	err = ubi_wl_flush_lnum(ubi, lnum);
+	ubi_put_device(ubi);
+	return err;
+}
+EXPORT_SYMBOL_GPL(ubi_lnum_purge);
+
 BLOCKING_NOTIFIER_HEAD(ubi_notifiers);

 /**
diff --git a/drivers/mtd/ubi/ubi.h b/drivers/mtd/ubi/ubi.h
index b3e5815..f91f965 100644
--- a/drivers/mtd/ubi/ubi.h
+++ b/drivers/mtd/ubi/ubi.h
@@ -535,6 +535,7 @@ int ubi_eba_init_scan(struct ubi_device *ubi, struct ubi_scan_info *si);
 int ubi_wl_get_peb(struct ubi_device *ubi);
 int ubi_wl_put_peb(struct ubi_device *ubi, int pnum, int torture, int lnum);
 int ubi_wl_flush(struct ubi_device *ubi);
+int ubi_wl_flush_lnum(struct ubi_device *ubi, int lnum);
 int ubi_wl_scrub_peb(struct ubi_device *ubi, int pnum);
 int ubi_wl_init_scan(struct ubi_device *ubi, struct ubi_scan_info *si);
 void ubi_wl_close(struct ubi_device *ubi);
diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 4e51735..56d9836 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -610,7 +610,6 @@ static int schedule_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
 	wl_wrk->e = e;
 	wl_wrk->torture = torture;
 	wl_wrk->lnum = lnum;
-
 	schedule_ubi_work(ubi, wl_wrk);
 	return 0;
 }
@@ -1237,6 +1236,47 @@ retry:
 }

 /**
+ * ubi_wl_flush_lnum - flush all pending works associated with a LEB number
+ * @ubi: UBI device description object
+ * @lnum: the LEB number only for which work should be synchronously executed
+ *
+ * This function executes pending works for PEBs which are associated with a
+ * particular LEB number. If one of the pending works fails, for instance
+ * because the erase block becomes a bad block during erasure, the error is
+ * returned without continuing, but the failed work is removed from the queue.
+ * This function returns zero if no work for @lnum remains on the queue and
+ * all executed works succeeded.
+ */
+int ubi_wl_flush_lnum(struct ubi_device *ubi, int lnum)
+{
+	int err = 0;
+	struct ubi_work *wrk, *tmp;
+
+	/* For each pending work, if it corresponds to the parameter @lnum,
+	 * then execute the work.
+	 */
+	dbg_wl("flush lnum %d", lnum);
+	list_for_each_entry_safe(wrk, tmp, &ubi->works, list) {
+		if (wrk->lnum == lnum) {
+			down_read(&ubi->work_sem);
+			spin_lock(&ubi->wl_lock);
+
+			list_del(&wrk->list);
+			ubi->works_count -= 1;
+			ubi_assert(ubi->works_count >= 0);
+			spin_unlock(&ubi->wl_lock);
+
+			err = wrk->func(ubi, wrk, 0);
+			up_read(&ubi->work_sem);
+			if (err)
+				return err;
+		}
+	}
+
+	return err;
+}
+
+/**
  * ubi_wl_flush - flush all pending works.
  * @ubi: UBI device description object
  *
diff --git a/include/linux/mtd/ubi.h b/include/linux/mtd/ubi.h
index 9838dce..8087e99 100644
--- a/include/linux/mtd/ubi.h
+++ b/include/linux/mtd/ubi.h
@@ -216,6 +216,7 @@ int ubi_leb_unmap(struct ubi_volume_desc *desc, int lnum);
 int ubi_leb_map(struct ubi_volume_desc *desc, int lnum);
 int ubi_is_mapped(struct ubi_volume_desc *desc, int lnum);
 int ubi_sync(int ubi_num);
+int ubi_lnum_purge(int ubi_num, int lnum);

 /*
  * This function is the same as the 'ubi_leb_read()' function, but it does not
-- 
1.7.5.4



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH V2] UBI: add ubi_lnum_purge function to clear work queue for a lnum
  2012-05-15  7:44 [PATCH V2] UBI: add ubi_lnum_purge function to clear work queue for a lnum Joel Reardon
@ 2012-05-15 11:40 ` Artem Bityutskiy
  2012-05-19  9:46   ` Joel Reardon
  2012-05-20 11:10 ` [PATCH V3] UBI: modify ubi_wl_flush " Joel Reardon
  1 sibling, 1 reply; 8+ messages in thread
From: Artem Bityutskiy @ 2012-05-15 11:40 UTC (permalink / raw)
  To: Joel Reardon; +Cc: linux-mtd, linux-kernel


On Tue, 2012-05-15 at 09:44 +0200, Joel Reardon wrote:
> This is the second part of a patch to allow UBI to force the erasure of
> particular logical eraseblock numbers. In this patch, a new function,
> ubi_lnum_purge, is added that allows the caller to synchronously erase all
> unmapped erase blocks whose LEB number corresponds to the parameter. This
> requires a previous patch that stores the LEB number in struct ubi_work.
> 
> This was tested by disabling the call to do_work in the UBI background
> thread, so that queued work remains until explicitly flushed. UBIFS was
> changed to call ubifs_leb_change 50 times for three different LEBs. The new
> function was then called to clear the queue for the three different LEB
> numbers one at a time. The work queue was dumped each time and the selective
> removal of the particular LEB numbers was observed.
> 
> Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>

No objections in general, and I can merge this as soon as you send the
final version. However, for this version I have several notes.

> +/**
> + * ubi_lnum_purge - synchronously erase unmapped PEBs by LEB number.
> + * @ubi_num: UBI device to erase PEBs
> + * @lnum: the LEB number to erase old unmapped PEBs.
> + *
> + * This function is designed to ensure that the contents of old, unmapped
> + * LEBs are securely erased without having to flush the entire work queue.
> + * Simply erasing the block at the time of unmapping is insufficient, as the
> + * wear-levelling subsystem may have already moved the contents. This function
> + * walks the list of erase blocks awaiting erasure and performs an immediate,
> + * synchronous erasure of any erase block that has held data for this
> + * particular @lnum. This may include eraseblocks that held older versions of
> + * the same @lnum.
> + * Returns zero in case of success and a negative error code in case of
> + * failure.
> + */
> +int ubi_lnum_purge(int ubi_num, int lnum)
> +{
> +	int err;
> +	struct ubi_device *ubi;
> +
> +	ubi = ubi_get_device(ubi_num);
> +	if (!ubi)
> +		return -ENODEV;
> +
> +	err = ubi_wl_flush_lnum(ubi, lnum);
> +	ubi_put_device(ubi);
> +	return err;
> +}
> +EXPORT_SYMBOL_GPL(ubi_lnum_purge);

Please, do not introduce a separate exported function for this. Instead,
add "lnum" argument to "ubi_wl_flush". Preserve the old behavior if lnum
is -1. Document this at the header comment. In your case you also need
to call mtd->sync() I think.

> +	dbg_wl("flush lnum %d", lnum);
> +	list_for_each_entry_safe(wrk, tmp, &ubi->works, list) {
> +		if (wrk->lnum == lnum) {
> +			down_read(&ubi->work_sem);
> +			spin_lock(&ubi->wl_lock);

But you cannot walk the ubi->works list without holding the spinlock.
Anyone may add or remove elements to or from this list concurrently.

Take the work_sem at the beginning. Release at the very end.

Then you can do something like this:

int found = 1;

while (found) {
	found = 0;

	spin_lock(&ubi->wl_lock);
	list_for_each_entry(wrk, &ubi->works, list) {
		if (wrk->lnum == lnum) {
			list_del(&wrk->list);
			ubi->works_count -= 1;
			ubi_assert(ubi->works_count >= 0);
			spin_unlock(&ubi->wl_lock);

			err = wrk->func(ubi, wrk, 0);
			if (err)
				return err;

			spin_lock(&ubi->wl_lock);
			found = 1;
			break;
		}
	}
	spin_unlock(&ubi->wl_lock);
}

-- 
Best Regards,
Artem Bityutskiy



* Re: [PATCH V2] UBI: add ubi_lnum_purge function to clear work queue for a lnum
  2012-05-15 11:40 ` Artem Bityutskiy
@ 2012-05-19  9:46   ` Joel Reardon
  2012-05-19 10:47     ` Artem Bityutskiy
  0 siblings, 1 reply; 8+ messages in thread
From: Joel Reardon @ 2012-05-19  9:46 UTC (permalink / raw)
  To: Artem Bityutskiy; +Cc: linux-mtd, linux-kernel

> Take the work_sem at the beginning. Release at the very end.
>
> Then you can do something like this:
>
> int found = 1;
>
> while (found) {
> 	found = 0;
>
> 	spin_lock(&ubi->wl_lock);
> 	list_for_each_entry(wrk, &ubi->works, list) {
> 		if (wrk->lnum == lnum) {
> 			list_del(&wrk->list);
> 			ubi->works_count -= 1;
> 			ubi_assert(ubi->works_count >= 0);
> 			spin_unlock(&ubi->wl_lock);
>
> 			err = wrk->func(ubi, wrk, 0);
> 			if (err)
> 				return err;
>
> 			spin_lock(&ubi->wl_lock);
> 			found = 1;
> 			break;
> 		}
> 	}
> 	spin_unlock(&ubi->wl_lock);
> }
>


If I use list_for_each_entry_safe(), it protects against deleting as it
iterates. If I take the work_sem first, is it okay to do a simple
traversal instead of one traversal per removed item? Even if another
thread adds new work for the same vol_id/lnum, it's okay, because the
caller of this function only cares about vol_id/lnum erasures that it
knows are currently on the worklist.

Cheers,
Joel Reardon


* Re: [PATCH V2] UBI: add ubi_lnum_purge function to clear work queue for a lnum
  2012-05-19  9:46   ` Joel Reardon
@ 2012-05-19 10:47     ` Artem Bityutskiy
  2012-05-19 10:52       ` Artem Bityutskiy
  0 siblings, 1 reply; 8+ messages in thread
From: Artem Bityutskiy @ 2012-05-19 10:47 UTC (permalink / raw)
  To: Joel Reardon; +Cc: linux-mtd, linux-kernel


On Sat, 2012-05-19 at 11:46 +0200, Joel Reardon wrote:
> > Take the work_sem at the beginning. Release at the very end.
> >
> > Then you can do something like this:
> >
> > int found = 1;
> >
> > while (found) {
> > 	found = 0;
> >
> > 	spin_lock(&ubi->wl_lock);
> > 	list_for_each_entry(wrk, &ubi->works, list) {
> > 		if (wrk->lnum == lnum) {
> > 			list_del(&wrk->list);
> > 			ubi->works_count -= 1;
> > 			ubi_assert(ubi->works_count >= 0);
> > 			spin_unlock(&ubi->wl_lock);
> >
> > 			err = wrk->func(ubi, wrk, 0);
> > 			if (err)
> > 				return err;
> >
> > 			spin_lock(&ubi->wl_lock);
> > 			found = 1;
> > 			break;
> > 		}
> > 	}
> > 	spin_unlock(&ubi->wl_lock);
> > }
> >
> 
> 
> If I use list_for_each_entry_safe(), it protects against deleting as it
> iterates.

Yes, but do not be confused: it does not protect against concurrent
deletion. I would not use the word "protect" at all - it is just the
version of list_for_each_entry() that we use when we delete list elements
inside the loop. It requires an extra temporary variable.

>  If I take the work_sem first, is it okay to do a simple
> traversal instead of one traversal per removed item?

No, work_sem does not protect the list. It is just a somewhat unusual way
of syncing with all the processes which are performing UBI works. Take a
closer look - there is only one place where we take it for writing -
ubi_wl_flush(). Everywhere else we take it for reading, which means that
many processes may enter the critical section and may concurrently
add/remove elements to/from the list. The list is protected by the
spinlock.

>  Even if another
> thread adds new work for the same vol_id/lnum, it's okay, because the
> caller of this function only cares about vol_id/lnum erasures that
> it knows are currently on the worklist.

If you walk the list and read pointers, and someone else modifies them,
you may be in trouble. You cannot traverse the list without the
spinlock. And once you dropped the spinlock - you have to start over
because your 'wrk' pointer may point to a non-existing object, because
this object might have been already freed. This is why I added 2 loops.

-- 
Best Regards,
Artem Bityutskiy



* Re: [PATCH V2] UBI: add ubi_lnum_purge function to clear work queue for a lnum
  2012-05-19 10:47     ` Artem Bityutskiy
@ 2012-05-19 10:52       ` Artem Bityutskiy
  0 siblings, 0 replies; 8+ messages in thread
From: Artem Bityutskiy @ 2012-05-19 10:52 UTC (permalink / raw)
  To: Joel Reardon; +Cc: linux-mtd, linux-kernel


On Sat, 2012-05-19 at 13:47 +0300, Artem Bityutskiy wrote:
> If you walk the list and read pointers, and someone else modifies them,
> you may be in trouble. You cannot traverse the list without the
> spinlock. And once you dropped the spinlock - you have to start over
> because your 'wrk' pointer may point to a non-existing object, because
> this object might have been already freed. This is why I added 2 loops.

Actually we free it ourselves in ->func() :-) I think it is saner not to
use list_for_each_entry() at all, but just take the element off the head,
unlock the spinlock, use container_of(), and call ->func(). That way we
need only one loop, not two.
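The loop shape under discussion - unlink one matching entry under the lock, drop the lock, run its handler, and start over - can be sketched in userspace with a minimal singly linked queue standing in for the kernel's list_head. Every name below (queue_work, flush, do_erase, and the simplified struct work) is invented for the illustration; the merged patch operates on the real UBI structures instead.

```c
#include <pthread.h>
#include <stdlib.h>

struct work {
	int vol_id;
	int lnum;
	int (*func)(struct work *wrk);	/* like ubi_work->func */
	struct work *next;
};

static struct work *works;		/* like ubi->works */
static int works_count;			/* like ubi->works_count */
static pthread_mutex_t wl_lock = PTHREAD_MUTEX_INITIALIZER;

static int executed;			/* counts handlers that ran */

static int do_erase(struct work *wrk)
{
	executed++;
	free(wrk);			/* the handler owns and frees its work item */
	return 0;
}

void queue_work(int vol_id, int lnum)
{
	struct work *wrk = malloc(sizeof(*wrk));
	wrk->vol_id = vol_id;
	wrk->lnum = lnum;
	wrk->func = do_erase;
	pthread_mutex_lock(&wl_lock);
	wrk->next = works;
	works = wrk;
	works_count++;
	pthread_mutex_unlock(&wl_lock);
}

/* One loop: unlink a single matching entry under the lock, run it with the
 * lock dropped, then rescan from the start; -1 acts as a wildcard. */
int flush(int vol_id, int lnum)
{
	for (;;) {
		struct work **p, *wrk = NULL;

		pthread_mutex_lock(&wl_lock);
		for (p = &works; *p; p = &(*p)->next) {
			if ((vol_id == -1 || (*p)->vol_id == vol_id) &&
			    (lnum == -1 || (*p)->lnum == lnum)) {
				wrk = *p;
				*p = wrk->next;	/* take it off the list */
				works_count--;
				break;
			}
		}
		pthread_mutex_unlock(&wl_lock);

		if (!wrk)
			return 0;	/* nothing left for this vol_id/lnum */

		int err = wrk->func(wrk);	/* runs without the lock held */
		if (err)
			return err;
	}
}
```

Because the matching entry is off the list before the lock is dropped, no other thread can see or free it, and the rescan-from-head avoids ever following a pointer that the handler may have freed.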

-- 
Best Regards,
Artem Bityutskiy



* Re: [PATCH V3] UBI: modify ubi_wl_flush function to clear work queue for a lnum
  2012-05-15  7:44 [PATCH V2] UBI: add ubi_lnum_purge function to clear work queue for a lnum Joel Reardon
  2012-05-15 11:40 ` Artem Bityutskiy
@ 2012-05-20 11:10 ` Joel Reardon
  2012-05-20 19:27   ` [PATCH V4] " Joel Reardon
  1 sibling, 1 reply; 8+ messages in thread
From: Joel Reardon @ 2012-05-20 11:10 UTC (permalink / raw)
  To: Artem Bityutskiy; +Cc: linux-mtd, linux-kernel


This patch modifies ubi_wl_flush to force the erasure of particular
volume id / logical eraseblock number pairs. Previous functionality is
preserved when passing -1 for both values. The locations where ubi_wl_flush
was called are changed appropriately: ubi_leb_erase only flushes for the
erased LEB, and ubi_create_volume only forces flushing for its volume id.
External code can call this new feature via the new function ubi_flush()
added to kapi.c, which simply passes through to ubi_wl_flush().

This was tested by disabling the call to do_work in the UBI background
thread, so that queued work remains until explicitly flushed. UBIFS was
changed to call ubifs_leb_change 50 times for three different LEBs. The new
function was then called to clear the queue for the three different LEB
numbers one at a time. The work queue was dumped each time and the selective
removal of the particular LEB numbers was observed. Extra checks were
enabled and ubifs's integck was also run.

Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>
---
 drivers/mtd/ubi/cdev.c  |    2 +-
 drivers/mtd/ubi/kapi.c  |   29 +++++++++++++++++++++-
 drivers/mtd/ubi/ubi.h   |    2 +-
 drivers/mtd/ubi/upd.c   |    4 +-
 drivers/mtd/ubi/vmt.c   |    2 +-
 drivers/mtd/ubi/wl.c    |   61 +++++++++++++++++++++++++++-------------------
 include/linux/mtd/ubi.h |    1 +
 7 files changed, 70 insertions(+), 31 deletions(-)

diff --git a/drivers/mtd/ubi/cdev.c b/drivers/mtd/ubi/cdev.c
index f406112..b60e6f4 100644
--- a/drivers/mtd/ubi/cdev.c
+++ b/drivers/mtd/ubi/cdev.c
@@ -514,7 +514,7 @@ static long vol_cdev_ioctl(struct file *file, unsigned int cmd,
 		if (err)
 			break;

-		err = ubi_wl_flush(ubi);
+		err = ubi_wl_flush(ubi, -1, -1);
 		break;
 	}

diff --git a/drivers/mtd/ubi/kapi.c b/drivers/mtd/ubi/kapi.c
index 33ede23..1666541 100644
--- a/drivers/mtd/ubi/kapi.c
+++ b/drivers/mtd/ubi/kapi.c
@@ -551,7 +551,7 @@ int ubi_leb_erase(struct ubi_volume_desc *desc, int lnum)
 	if (err)
 		return err;

-	return ubi_wl_flush(ubi);
+	return ubi_wl_flush(ubi, vol->vol_id, lnum);
 }
 EXPORT_SYMBOL_GPL(ubi_leb_erase);

@@ -704,6 +704,33 @@ int ubi_sync(int ubi_num)
 }
 EXPORT_SYMBOL_GPL(ubi_sync);

+/**
+ * ubi_flush - flush UBI work queue
+ * @ubi_num: UBI device to flush work queue
+ * @vol_id: volume id to flush for
+ * @lnum: logical eraseblock number to flush for
+ *
+ * This function executes all pending works for a particular volume id /
+ * logical eraseblock number pair. If either value is set to -1, then it acts
+ * as a wildcard for all of the corresponding volume numbers or logical
+ * eraseblock numbers. It returns zero in case of success and a negative error
+ * code in case of failure.
+ */
+int ubi_flush(int ubi_num, int vol_id, int lnum)
+{
+	struct ubi_device *ubi;
+	int err = 0;
+
+	ubi = ubi_get_device(ubi_num);
+	if (!ubi)
+		return -ENODEV;
+
+	err = ubi_wl_flush(ubi, vol_id, lnum);
+	ubi_put_device(ubi);
+	return err;
+}
+EXPORT_SYMBOL_GPL(ubi_flush);
+
 BLOCKING_NOTIFIER_HEAD(ubi_notifiers);

 /**
diff --git a/drivers/mtd/ubi/ubi.h b/drivers/mtd/ubi/ubi.h
index 05aa452..0664ddc 100644
--- a/drivers/mtd/ubi/ubi.h
+++ b/drivers/mtd/ubi/ubi.h
@@ -535,7 +535,7 @@ int ubi_eba_init_scan(struct ubi_device *ubi, struct ubi_scan_info *si);
 int ubi_wl_get_peb(struct ubi_device *ubi);
 int ubi_wl_put_peb(struct ubi_device *ubi, int vol_id, int lnum,
 		   int pnum, int torture);
-int ubi_wl_flush(struct ubi_device *ubi);
+int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum);
 int ubi_wl_scrub_peb(struct ubi_device *ubi, int pnum);
 int ubi_wl_init_scan(struct ubi_device *ubi, struct ubi_scan_info *si);
 void ubi_wl_close(struct ubi_device *ubi);
diff --git a/drivers/mtd/ubi/upd.c b/drivers/mtd/ubi/upd.c
index 11a28f9..8025124 100644
--- a/drivers/mtd/ubi/upd.c
+++ b/drivers/mtd/ubi/upd.c
@@ -147,7 +147,7 @@ int ubi_start_update(struct ubi_device *ubi, struct ubi_volume *vol,
 	}

 	if (bytes == 0) {
-		err = ubi_wl_flush(ubi);
+		err = ubi_wl_flush(ubi, -1, -1);
 		if (err)
 			return err;

@@ -361,7 +361,7 @@ int ubi_more_update_data(struct ubi_device *ubi, struct ubi_volume *vol,

 	ubi_assert(vol->upd_received <= vol->upd_bytes);
 	if (vol->upd_received == vol->upd_bytes) {
-		err = ubi_wl_flush(ubi);
+		err = ubi_wl_flush(ubi, -1, -1);
 		if (err)
 			return err;
 		/* The update is finished, clear the update marker */
diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
index 863835f..01cf61a 100644
--- a/drivers/mtd/ubi/vmt.c
+++ b/drivers/mtd/ubi/vmt.c
@@ -284,7 +284,7 @@ int ubi_create_volume(struct ubi_device *ubi, struct ubi_mkvol_req *req)
 	 * Finish all pending erases because there may be some LEBs belonging
 	 * to the same volume ID.
 	 */
-	err = ubi_wl_flush(ubi);
+	err = ubi_wl_flush(ubi, vol_id, -1);
 	if (err)
 		goto out_acc;

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 3df1d48..83d09a6 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -1248,44 +1248,55 @@ retry:
 /**
  * ubi_wl_flush - flush all pending works.
  * @ubi: UBI device description object
+ * @vol_id: the volume id to flush for
+ * @lnum: the logical eraseblock number to flush for
  *
- * This function returns zero in case of success and a negative error code in
- * case of failure.
+ * This function executes all pending works for a particular volume id /
+ * logical eraseblock number pair. If either value is set to -1, then it acts
+ * as a wildcard for all of the corresponding volume numbers or logical
+ * eraseblock numbers. It returns zero in case of success and a negative error
+ * code in case of failure.
  */
-int ubi_wl_flush(struct ubi_device *ubi)
+int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum)
 {
-	int err;
+	int err = 0;
+	int found = 1;

 	/*
 	 * Erase while the pending works queue is not empty, but not more than
 	 * the number of currently pending works.
 	 */
-	dbg_wl("flush (%d pending works)", ubi->works_count);
-	while (ubi->works_count) {
-		err = do_work(ubi);
-		if (err)
-			return err;
-	}
+	dbg_wl("flush pending work for LEB %d:%d (%d pending works)",
+	       vol_id, lnum, ubi->works_count);

-	/*
-	 * Make sure all the works which have been done in parallel are
-	 * finished.
-	 */
 	down_write(&ubi->work_sem);
-	up_write(&ubi->work_sem);
+	while (found) {
+		struct ubi_work *wrk;
+		found = 0;

-	/*
-	 * And in case last was the WL worker and it canceled the LEB
-	 * movement, flush again.
-	 */
-	while (ubi->works_count) {
-		dbg_wl("flush more (%d pending works)", ubi->works_count);
-		err = do_work(ubi);
-		if (err)
-			return err;
+		spin_lock(&ubi->wl_lock);
+		list_for_each_entry(wrk, &ubi->works, list) {
+			if ((vol_id == -1 || wrk->vol_id == vol_id) &&
+			    (lnum == -1 || wrk->lnum == lnum)) {
+				list_del(&wrk->list);
+				ubi->works_count -= 1;
+				ubi_assert(ubi->works_count >= 0);
+				spin_unlock(&ubi->wl_lock);
+
+				err = wrk->func(ubi, wrk, 0);
+				if (err)
+					goto out;
+				spin_lock(&ubi->wl_lock);
+				found = 1;
+				break;
+			}
+		}
+		spin_unlock(&ubi->wl_lock);
 	}

-	return 0;
+out:
+	up_write(&ubi->work_sem);
+	return err;
 }

 /**
diff --git a/include/linux/mtd/ubi.h b/include/linux/mtd/ubi.h
index 9838dce..e9f8088 100644
--- a/include/linux/mtd/ubi.h
+++ b/include/linux/mtd/ubi.h
@@ -216,6 +216,7 @@ int ubi_leb_unmap(struct ubi_volume_desc *desc, int lnum);
 int ubi_leb_map(struct ubi_volume_desc *desc, int lnum);
 int ubi_is_mapped(struct ubi_volume_desc *desc, int lnum);
 int ubi_sync(int ubi_num);
+int ubi_flush(int ubi_num, int vol_id, int lnum);

 /*
  * This function is the same as the 'ubi_leb_read()' function, but it does not
-- 
1.7.5.4




* Re: [PATCH V4] UBI: modify ubi_wl_flush function to clear work queue for a lnum
  2012-05-20 11:10 ` [PATCH V3] UBI: modify ubi_wl_flush " Joel Reardon
@ 2012-05-20 19:27   ` Joel Reardon
  2012-05-21 11:41     ` Artem Bityutskiy
  0 siblings, 1 reply; 8+ messages in thread
From: Joel Reardon @ 2012-05-20 19:27 UTC (permalink / raw)
  To: Artem Bityutskiy; +Cc: linux-mtd, linux-kernel

This patch modifies ubi_wl_flush to force the erasure of particular
volume id / logical eraseblock number pairs. Previous functionality is
preserved when passing UBI_UNKNOWN for both values. The locations where
ubi_wl_flush was called are changed appropriately: ubi_leb_erase only
flushes for the erased LEB, and ubi_create_volume only forces flushing for
its volume id. External code can call this new feature via the new function
ubi_flush() added to kapi.c, which simply passes through to ubi_wl_flush().

This was tested by disabling the call to do_work in the UBI background
thread, so that queued work remains until explicitly flushed. UBIFS was
changed to call ubifs_leb_change 50 times for four different LEBs. The new
function was then called to clear the queue: passing wrong volume ids /
lnums, correct ones, and finally UBI_UNKNOWN for both to ensure the queue
was completely cleared. The work queue was dumped each time and the
selective removal of the particular LEB numbers was observed. Extra checks
were enabled and ubifs's integck was also run. Finally, the drive was
repeatedly filled and emptied to ensure that the queue was cleared normally.

This was retested after rebase.

Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>
---
 drivers/mtd/ubi/cdev.c  |    2 +-
 drivers/mtd/ubi/kapi.c  |   29 +++++++++++++++++++++-
 drivers/mtd/ubi/ubi.h   |    2 +-
 drivers/mtd/ubi/upd.c   |    4 +-
 drivers/mtd/ubi/vmt.c   |    2 +-
 drivers/mtd/ubi/wl.c    |   61 +++++++++++++++++++++++++++-------------------
 include/linux/mtd/ubi.h |    1 +
 7 files changed, 70 insertions(+), 31 deletions(-)

diff --git a/drivers/mtd/ubi/cdev.c b/drivers/mtd/ubi/cdev.c
index 2364c00..7f934e7 100644
--- a/drivers/mtd/ubi/cdev.c
+++ b/drivers/mtd/ubi/cdev.c
@@ -514,7 +514,7 @@ static long vol_cdev_ioctl(struct file *file, unsigned int cmd,
 		if (err)
 			break;

-		err = ubi_wl_flush(ubi);
+		err = ubi_wl_flush(ubi, UBI_UNKNOWN, UBI_UNKNOWN);
 		break;
 	}

diff --git a/drivers/mtd/ubi/kapi.c b/drivers/mtd/ubi/kapi.c
index d76fe47..a3d9668 100644
--- a/drivers/mtd/ubi/kapi.c
+++ b/drivers/mtd/ubi/kapi.c
@@ -551,7 +551,7 @@ int ubi_leb_erase(struct ubi_volume_desc *desc, int lnum)
 	if (err)
 		return err;

-	return ubi_wl_flush(ubi);
+	return ubi_wl_flush(ubi, vol->vol_id, lnum);
 }
 EXPORT_SYMBOL_GPL(ubi_leb_erase);

@@ -704,6 +704,33 @@ int ubi_sync(int ubi_num)
 }
 EXPORT_SYMBOL_GPL(ubi_sync);

+/**
+ * ubi_flush - flush UBI work queue
+ * @ubi_num: UBI device to flush work queue
+ * @vol_id: volume id to flush for
+ * @lnum: logical eraseblock number to flush for
+ *
+ * This function executes all pending works for a particular volume id /
+ * logical eraseblock number pair. If either value is set to UBI_UNKNOWN, then
+ * it acts as a wildcard for all of the corresponding volume numbers or logical
+ * eraseblock numbers. It returns zero in case of success and a negative error
+ * code in case of failure.
+ */
+int ubi_flush(int ubi_num, int vol_id, int lnum)
+{
+	struct ubi_device *ubi;
+	int err = 0;
+
+	ubi = ubi_get_device(ubi_num);
+	if (!ubi)
+		return -ENODEV;
+
+	err = ubi_wl_flush(ubi, vol_id, lnum);
+	ubi_put_device(ubi);
+	return err;
+}
+EXPORT_SYMBOL_GPL(ubi_flush);
+
 BLOCKING_NOTIFIER_HEAD(ubi_notifiers);

 /**
diff --git a/drivers/mtd/ubi/ubi.h b/drivers/mtd/ubi/ubi.h
index f45d9ea..a19ff9e 100644
--- a/drivers/mtd/ubi/ubi.h
+++ b/drivers/mtd/ubi/ubi.h
@@ -666,7 +666,7 @@ int ubi_eba_init(struct ubi_device *ubi, struct ubi_attach_info *ai);
 int ubi_wl_get_peb(struct ubi_device *ubi);
 int ubi_wl_put_peb(struct ubi_device *ubi, int vol_id, int lnum,
 		   int pnum, int torture);
-int ubi_wl_flush(struct ubi_device *ubi);
+int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum);
 int ubi_wl_scrub_peb(struct ubi_device *ubi, int pnum);
 int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai);
 void ubi_wl_close(struct ubi_device *ubi);
diff --git a/drivers/mtd/ubi/upd.c b/drivers/mtd/ubi/upd.c
index 11a28f9..944656e 100644
--- a/drivers/mtd/ubi/upd.c
+++ b/drivers/mtd/ubi/upd.c
@@ -147,7 +147,7 @@ int ubi_start_update(struct ubi_device *ubi, struct ubi_volume *vol,
 	}

 	if (bytes == 0) {
-		err = ubi_wl_flush(ubi);
+		err = ubi_wl_flush(ubi, UBI_UNKNOWN, UBI_UNKNOWN);
 		if (err)
 			return err;

@@ -361,7 +361,7 @@ int ubi_more_update_data(struct ubi_device *ubi, struct ubi_volume *vol,

 	ubi_assert(vol->upd_received <= vol->upd_bytes);
 	if (vol->upd_received == vol->upd_bytes) {
-		err = ubi_wl_flush(ubi);
+		err = ubi_wl_flush(ubi, UBI_UNKNOWN, UBI_UNKNOWN);
 		if (err)
 			return err;
 		/* The update is finished, clear the update marker */
diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
index e4b897a..d3e6664 100644
--- a/drivers/mtd/ubi/vmt.c
+++ b/drivers/mtd/ubi/vmt.c
@@ -280,7 +280,7 @@ int ubi_create_volume(struct ubi_device *ubi, struct ubi_mkvol_req *req)
 	 * Finish all pending erases because there may be some LEBs belonging
 	 * to the same volume ID.
 	 */
-	err = ubi_wl_flush(ubi);
+	err = ubi_wl_flush(ubi, vol_id, UBI_UNKNOWN);
 	if (err)
 		goto out_acc;

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 70ebfa7..5556bab 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -1241,44 +1241,55 @@ retry:
 /**
  * ubi_wl_flush - flush all pending works.
  * @ubi: UBI device description object
+ * @vol_id: the volume id to flush for
+ * @lnum: the logical eraseblock number to flush for
  *
- * This function returns zero in case of success and a negative error code in
- * case of failure.
+ * This function executes all pending works for a particular volume id /
+ * logical eraseblock number pair. If either value is set to UBI_UNKNOWN,
+ * then it acts as a wildcard for all of the corresponding volume numbers or
+ * logical eraseblock numbers. It returns zero in case of success and a
+ * negative error code in case of failure.
  */
-int ubi_wl_flush(struct ubi_device *ubi)
+int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum)
 {
-	int err;
+	int err = 0;
+	int found = 1;

 	/*
 	 * Erase while the pending works queue is not empty, but not more than
 	 * the number of currently pending works.
 	 */
-	dbg_wl("flush (%d pending works)", ubi->works_count);
-	while (ubi->works_count) {
-		err = do_work(ubi);
-		if (err)
-			return err;
-	}
+	dbg_wl("flush pending work for LEB %d:%d (%d pending works)",
+	       vol_id, lnum, ubi->works_count);

-	/*
-	 * Make sure all the works which have been done in parallel are
-	 * finished.
-	 */
 	down_write(&ubi->work_sem);
-	up_write(&ubi->work_sem);
+	while (found) {
+		struct ubi_work *wrk;
+		found = 0;

-	/*
-	 * And in case last was the WL worker and it canceled the LEB
-	 * movement, flush again.
-	 */
-	while (ubi->works_count) {
-		dbg_wl("flush more (%d pending works)", ubi->works_count);
-		err = do_work(ubi);
-		if (err)
-			return err;
+		spin_lock(&ubi->wl_lock);
+		list_for_each_entry(wrk, &ubi->works, list) {
+			if ((vol_id == UBI_UNKNOWN || wrk->vol_id == vol_id) &&
+			    (lnum == UBI_UNKNOWN || wrk->lnum == lnum)) {
+				list_del(&wrk->list);
+				ubi->works_count -= 1;
+				ubi_assert(ubi->works_count >= 0);
+				spin_unlock(&ubi->wl_lock);
+
+				err = wrk->func(ubi, wrk, 0);
+				if (err)
+					goto out;
+				spin_lock(&ubi->wl_lock);
+				found = 1;
+				break;
+			}
+		}
+		spin_unlock(&ubi->wl_lock);
 	}

-	return 0;
+out:
+	up_write(&ubi->work_sem);
+	return err;
 }

 /**
diff --git a/include/linux/mtd/ubi.h b/include/linux/mtd/ubi.h
index f636f5d..c3918a0 100644
--- a/include/linux/mtd/ubi.h
+++ b/include/linux/mtd/ubi.h
@@ -219,6 +219,7 @@ int ubi_leb_unmap(struct ubi_volume_desc *desc, int lnum);
 int ubi_leb_map(struct ubi_volume_desc *desc, int lnum);
 int ubi_is_mapped(struct ubi_volume_desc *desc, int lnum);
 int ubi_sync(int ubi_num);
+int ubi_flush(int ubi_num, int vol_id, int lnum);

 /*
  * This function is the same as the 'ubi_leb_read()' function, but it does not
-- 
1.7.5.4




* Re: [PATCH V4] UBI: modify ubi_wl_flush function to clear work queue for a lnum
  2012-05-20 19:27   ` [PATCH V4] " Joel Reardon
@ 2012-05-21 11:41     ` Artem Bityutskiy
  0 siblings, 0 replies; 8+ messages in thread
From: Artem Bityutskiy @ 2012-05-21 11:41 UTC (permalink / raw)
  To: Joel Reardon; +Cc: linux-mtd, linux-kernel


On Sun, 2012-05-20 at 21:27 +0200, Joel Reardon wrote:
> This patch modifies ubi_wl_flush to force the erasure of particular
> volume id / logical eraseblock number pairs. Previous functionality is
> preserved when passing UBI_UNKNOWN for both values. The locations where
> ubi_wl_flush was called are changed appropriately: ubi_leb_erase only
> flushes for the erased LEB, and ubi_create_volume only forces flushing for
> its volume id. External code can call this new feature via the new function
> ubi_flush() added to kapi.c, which simply passes through to ubi_wl_flush().
> 
> This was tested by disabling the call to do_work in the UBI background
> thread, so that queued work remains until explicitly flushed. UBIFS was
> changed to call ubifs_leb_change 50 times for four different LEBs. The new
> function was then called to clear the queue: passing wrong volume ids /
> lnums, correct ones, and finally UBI_UNKNOWN for both to ensure the queue
> was completely cleared. The work queue was dumped each time and the
> selective removal of the particular LEB numbers was observed. Extra checks
> were enabled and ubifs's integck was also run. Finally, the drive was
> repeatedly filled and emptied to ensure that the queue was cleared normally.

I've amended this patch to use UBI_ALL instead, and pushed to
linux-ubi.git tree, and in your branch as well. I'll send it to Linus
this merge window as well. Thanks!

-- 
Best Regards,
Artem Bityutskiy



