linux-kernel.vger.kernel.org archive mirror
* [RFC] Don't hold work_sem while calling worker functions
@ 2014-09-16 13:26 Richard Weinberger
  2014-09-16 13:26 ` [PATCH 1/2] UBI: Call worker functions without work_sem held Richard Weinberger
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Richard Weinberger @ 2014-09-16 13:26 UTC (permalink / raw)
  To: dedekind1; +Cc: linux-mtd, linux-kernel

I fail to see why we need work_sem while wrk->func() is executed.
Artem, do you have an idea?

Having the wear_leveling_worker() called without work_sem held
would simplify the fastmap code too. I'm currently reworking some
of its code and I'm in locking hell. 8-)

Thanks,
//richard

[PATCH 1/2] UBI: Call worker functions without work_sem held
[PATCH 2/2] UBI: Get rid of __schedule_ubi_work()


* [PATCH 1/2] UBI: Call worker functions without work_sem held
  2014-09-16 13:26 [RFC] Don't hold work_sem while calling worker functions Richard Weinberger
@ 2014-09-16 13:26 ` Richard Weinberger
  2014-09-16 13:26 ` [PATCH 2/2] UBI: Get rid of __schedule_ubi_work() Richard Weinberger
  2014-09-17  8:42 ` [RFC] Don't hold work_sem while calling worker functions Artem Bityutskiy
  2 siblings, 0 replies; 4+ messages in thread
From: Richard Weinberger @ 2014-09-16 13:26 UTC (permalink / raw)
  To: dedekind1; +Cc: linux-mtd, linux-kernel, Richard Weinberger

There is no need to call the worker function with work_sem held.
We only need the semaphore to synchronize workers and protect
the work list.

Signed-off-by: Richard Weinberger <richard@nod.at>
---
 drivers/mtd/ubi/wl.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 20f4917..b1e2ed1 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -245,6 +245,7 @@ static int do_work(struct ubi_device *ubi)
 	ubi->works_count -= 1;
 	ubi_assert(ubi->works_count >= 0);
 	spin_unlock(&ubi->wl_lock);
+	up_read(&ubi->work_sem);
 
 	/*
 	 * Call the worker function. Do not touch the work structure
@@ -254,7 +255,6 @@ static int do_work(struct ubi_device *ubi)
 	err = wrk->func(ubi, wrk, 0);
 	if (err)
 		ubi_err("work failed with error code %d", err);
-	up_read(&ubi->work_sem);
 
 	return err;
 }
@@ -1452,7 +1452,7 @@ static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
 		serve_prot_queue(ubi);
 
 		/* And take care about wear-leveling */
-		err = ensure_wear_leveling(ubi, 1);
+		err = ensure_wear_leveling(ubi, 0);
 		return err;
 	}
 
@@ -1730,13 +1730,14 @@ int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum)
 				ubi->works_count -= 1;
 				ubi_assert(ubi->works_count >= 0);
 				spin_unlock(&ubi->wl_lock);
+				up_read(&ubi->work_sem);
 
 				err = wrk->func(ubi, wrk, 0);
 				if (err) {
-					up_read(&ubi->work_sem);
 					return err;
 				}
 
+				down_read(&ubi->work_sem);
 				spin_lock(&ubi->wl_lock);
 				found = 1;
 				break;
-- 
1.8.4.5



* [PATCH 2/2] UBI: Get rid of __schedule_ubi_work()
  2014-09-16 13:26 [RFC] Don't hold work_sem while calling worker functions Richard Weinberger
  2014-09-16 13:26 ` [PATCH 1/2] UBI: Call worker functions without work_sem held Richard Weinberger
@ 2014-09-16 13:26 ` Richard Weinberger
  2014-09-17  8:42 ` [RFC] Don't hold work_sem while calling worker functions Artem Bityutskiy
  2 siblings, 0 replies; 4+ messages in thread
From: Richard Weinberger @ 2014-09-16 13:26 UTC (permalink / raw)
  To: dedekind1; +Cc: linux-mtd, linux-kernel, Richard Weinberger

Now that workers no longer run with work_sem held, we can
remove __schedule_ubi_work() too.

Signed-off-by: Richard Weinberger <richard@nod.at>
---
 drivers/mtd/ubi/wl.c | 35 +++++++++--------------------------
 1 file changed, 9 insertions(+), 26 deletions(-)

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index b1e2ed1..ba15d33 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -830,15 +830,16 @@ repeat:
 }
 
 /**
- * __schedule_ubi_work - schedule a work.
+ * schedule_ubi_work - schedule a work.
  * @ubi: UBI device description object
  * @wrk: the work to schedule
  *
  * This function adds a work defined by @wrk to the tail of the pending works
- * list. Can only be used of ubi->work_sem is already held in read mode!
+ * list.
  */
-static void __schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk)
+static void schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk)
 {
+	down_read(&ubi->work_sem);
 	spin_lock(&ubi->wl_lock);
 	list_add_tail(&wrk->list, &ubi->works);
 	ubi_assert(ubi->works_count >= 0);
@@ -846,20 +847,6 @@ static void __schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk)
 	if (ubi->thread_enabled && !ubi_dbg_is_bgt_disabled(ubi))
 		wake_up_process(ubi->bgt_thread);
 	spin_unlock(&ubi->wl_lock);
-}
-
-/**
- * schedule_ubi_work - schedule a work.
- * @ubi: UBI device description object
- * @wrk: the work to schedule
- *
- * This function adds a work defined by @wrk to the tail of the pending works
- * list.
- */
-static void schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk)
-{
-	down_read(&ubi->work_sem);
-	__schedule_ubi_work(ubi, wrk);
 	up_read(&ubi->work_sem);
 }
 
@@ -1304,13 +1291,12 @@ out_cancel:
 /**
  * ensure_wear_leveling - schedule wear-leveling if it is needed.
  * @ubi: UBI device description object
- * @nested: set to non-zero if this function is called from UBI worker
  *
  * This function checks if it is time to start wear-leveling and schedules it
  * if yes. This function returns zero in case of success and a negative error
  * code in case of failure.
  */
-static int ensure_wear_leveling(struct ubi_device *ubi, int nested)
+static int ensure_wear_leveling(struct ubi_device *ubi)
 {
 	int err = 0;
 	struct ubi_wl_entry *e1;
@@ -1357,10 +1343,7 @@ static int ensure_wear_leveling(struct ubi_device *ubi, int nested)
 
 	wrk->anchor = 0;
 	wrk->func = &wear_leveling_worker;
-	if (nested)
-		__schedule_ubi_work(ubi, wrk);
-	else
-		schedule_ubi_work(ubi, wrk);
+	schedule_ubi_work(ubi, wrk);
 	return err;
 
 out_cancel:
@@ -1452,7 +1435,7 @@ static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
 		serve_prot_queue(ubi);
 
 		/* And take care about wear-leveling */
-		err = ensure_wear_leveling(ubi, 0);
+		err = ensure_wear_leveling(ubi);
 		return err;
 	}
 
@@ -1690,7 +1673,7 @@ retry:
 	 * Technically scrubbing is the same as wear-leveling, so it is done
 	 * by the WL worker.
 	 */
-	return ensure_wear_leveling(ubi, 0);
+	return ensure_wear_leveling(ubi);
 }
 
 /**
@@ -1991,7 +1974,7 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
 	ubi->rsvd_pebs += reserved_pebs;
 
 	/* Schedule wear-leveling if needed */
-	err = ensure_wear_leveling(ubi, 0);
+	err = ensure_wear_leveling(ubi);
 	if (err)
 		goto out_free;
 
-- 
1.8.4.5



* Re: [RFC] Don't hold work_sem while calling worker functions
  2014-09-16 13:26 [RFC] Don't hold work_sem while calling worker functions Richard Weinberger
  2014-09-16 13:26 ` [PATCH 1/2] UBI: Call worker functions without work_sem held Richard Weinberger
  2014-09-16 13:26 ` [PATCH 2/2] UBI: Get rid of __schedule_ubi_work() Richard Weinberger
@ 2014-09-17  8:42 ` Artem Bityutskiy
  2 siblings, 0 replies; 4+ messages in thread
From: Artem Bityutskiy @ 2014-09-17  8:42 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd, linux-kernel

On Tue, 2014-09-16 at 15:26 +0200, Richard Weinberger wrote:
> I fail to see why we need work_sem while wrk->func() is executed.
> Artem, do you have an idea?
> 
> Having the wear_leveling_worker() called without work_sem held
> would simplify the fastmap code too. I'm currently reworking some
> of it's code and I'm in locking hell. 8-)

Well, the best way I have found to get a clue about the meaning of an
R/W semaphore with an unlimited number of read-takers is to focus on
the write-takers. The read-takers are uninteresting, because they can
race freely.

So let's check write-takers.

There are two of them: one in your code, and one that I wrote many
years ago.

"Mine" is in 'ubi_wl_flush()':

        down_write(&ubi->work_sem);
        up_write(&ubi->work_sem);

And the only reason it is there is to make sure that flush really
flushes the queue: when 'ubi_wl_flush()' returns, you may be sure
that all the in-flight works have finished.

There are other ways to achieve this, but I found using the R/W
semaphore to be the easiest. Indeed, just make all the works hold it
in read mode, and when you have to wait for all the in-flight works to
complete, take it in write mode - easy.

IOW, this is a bit of an unusual use of R/W semaphores.

HTH.

P.S. Generally, if you have trouble with a lock, start by checking
the place where it is defined; I tried to document locks there briefly.
And there may be pieces of useful comments elsewhere. This should be
true for both UBI and UBIFS. So just a general hint.

For 'work_sem' you'd need to check ubi.h. But unfortunately, the comment
there is not helpful, and even has a typo which makes it confusing.

While you are at it, would you refine the comment to say something like:

work_sem: used to wait for all the scheduled works to finish and prevent
new works from being submitted


-- 
Best Regards,
Artem Bityutskiy


