* [PATCH][2.6-mm] Avoid flushing AIO workqueue on cancel/exit
@ 2003-10-21 10:25 Suparna Bhattacharya
  2003-10-21 10:59 ` Andrew Morton
  0 siblings, 1 reply; 3+ messages in thread
From: Suparna Bhattacharya @ 2003-10-21 10:25 UTC (permalink / raw)
  To: akpm; +Cc: linux-aio, linux-kernel

When streaming AIO requests are in progress on multiple
io contexts, flushing the AIO workqueue on I/O cancellation
or process exit could potentially end up waiting for a
long time as fresh requests from other active ioctxs keep
getting queued up. I haven't run into this in practice,
but it looks like a problem.

Instead, this patch increments the ioctx reference count
before queueing up a retry, ensuring that the ioctx 
doesn't go away until the workqueue handler is done with
processing the ioctx and releases its reference to it.
Note: This does mean that exit_mmap can happen before the 
last reference to the ioctx is gone, but only after all 
iocbs have completed using the caller's mm.

Regards
Suparna

-- 
Suparna Bhattacharya (suparna@in.ibm.com)
Linux Technology Center
IBM Software Labs, India

diffstat aio-undo-flushworkqueue.patch

 aio.c |   28 ++++++++++++----------------
 1 files changed, 12 insertions(+), 16 deletions(-)

diff -urp linux-2.6.0-test6-mm4/fs/aio.c 260t6mm4/fs/aio.c
--- linux-2.6.0-test6-mm4/fs/aio.c	Tue Oct 21 19:20:50 2003
+++ 260t6mm4/fs/aio.c	Tue Oct 21 19:09:26 2003
@@ -63,6 +63,12 @@ LIST_HEAD(fput_head);
 
 static void aio_kick_handler(void *);
 
+static inline void queue_ctx_work(struct kioctx *ctx)
+{
+	get_ioctx(ctx);
+	queue_work(aio_wq, &ctx->wq);
+}
+
 /* aio_setup
  *	Creates the slab caches used by the aio routines, panic on
  *	failure as this is done early during the boot sequence.
@@ -349,15 +355,9 @@ void exit_aio(struct mm_struct *mm)
 		aio_cancel_all(ctx);
 
 		wait_for_all_aios(ctx);
-		/*
-		 * this is an overkill, but ensures we don't leave
-		 * the ctx on the aio_wq
-		 */
-		flush_workqueue(aio_wq);
 
 		if (1 != atomic_read(&ctx->users))
-			printk(KERN_DEBUG
-				"exit_aio:ioctx still alive: %d %d %d\n",
+			pr_debug("exit_aio:ioctx still alive: %d %d %d\n",
 				atomic_read(&ctx->users), ctx->dead,
 				ctx->reqs_active);
 		put_ioctx(ctx);
@@ -807,7 +812,7 @@ static inline void aio_run_iocbs(struct 
 	requeue = __aio_run_iocbs(ctx);
 	spin_unlock_irq(&ctx->ctx_lock);
 	if (requeue)
-		queue_work(aio_wq, &ctx->wq);
+		queue_ctx_work(ctx);
 }
 
 /*
@@ -833,7 +838,8 @@ static void aio_kick_handler(void *data)
 	spin_unlock_irq(&ctx->ctx_lock);
 	set_fs(oldfs);
 	if (requeue)
-		queue_work(aio_wq, &ctx->wq);
+		queue_ctx_work(ctx);
+	put_ioctx(ctx);
 }
 
 
@@ -854,7 +860,7 @@ void queue_kicked_iocb(struct kiocb *ioc
 	run = __queue_kicked_iocb(iocb);
 	spin_unlock_irqrestore(&ctx->ctx_lock, flags);
 	if (run) {
-		queue_work(aio_wq, &ctx->wq);
+		queue_ctx_work(ctx);
 		aio_wakeups++;
 	}
 }
@@ -1201,11 +1207,6 @@ static void io_destroy(struct kioctx *io
 
 	aio_cancel_all(ioctx);
 	wait_for_all_aios(ioctx);
-	/*
-	 * this is an overkill, but ensures we don't leave
-	 * the ctx on the aio_wq
-	 */
-	flush_workqueue(aio_wq);
 	put_ioctx(ioctx);	/* once for the lookup */
 }
 
@@ -1524,7 +1525,7 @@ int io_submit_one(struct kioctx *ctx, st
 	spin_unlock_irq(&ctx->ctx_lock);
 
 	if (-EIOCBRETRY == ret)
-		queue_work(aio_wq, &ctx->wq);
+		queue_ctx_work(ctx);
 
 	if (need_putctx)
 		put_ioctx(ctx);


* Re: [PATCH][2.6-mm] Avoid flushing AIO workqueue on cancel/exit
  2003-10-21 10:25 [PATCH][2.6-mm] Avoid flushing AIO workqueue on cancel/exit Suparna Bhattacharya
@ 2003-10-21 10:59 ` Andrew Morton
  2003-10-21 11:27   ` Suparna Bhattacharya
  0 siblings, 1 reply; 3+ messages in thread
From: Andrew Morton @ 2003-10-21 10:59 UTC (permalink / raw)
  To: suparna; +Cc: linux-aio, linux-kernel

Suparna Bhattacharya <suparna@in.ibm.com> wrote:
>
> When streaming AIO requests are in progress on multiple
>  io contexts, flushing the AIO workqueue on I/O cancellation
>  or process exit could potentially end up waiting for a
>  long time as fresh requests from other active ioctxs keep
>  getting queued up.

But flush_workqueue() will ignore any newly-added work requests:

 * This function will sample each workqueue's current insert_sequence number and
 * will sleep until the head sequence is greater than or equal to that.  This
 * means that we sleep until all works which were queued on entry have been
 * handled, but we are not livelocked by new incoming ones.

Now, flush_workqueue() is potentially inefficient on SMP because it flushes
each CPU's workqueue sequentially.  But we can fix that in
flush_workqueue() by converting it to a two-pass approach:

a) gather each CPU's insert_sequence number into a local array[NR_CPUS]

b) wait until each CPU's remove_sequence number exceeds the previously-gathered
   insert_sequence number.




* Re: [PATCH][2.6-mm] Avoid flushing AIO workqueue on cancel/exit
  2003-10-21 10:59 ` Andrew Morton
@ 2003-10-21 11:27   ` Suparna Bhattacharya
  0 siblings, 0 replies; 3+ messages in thread
From: Suparna Bhattacharya @ 2003-10-21 11:27 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-aio, linux-kernel

On Tue, Oct 21, 2003 at 03:59:00AM -0700, Andrew Morton wrote:
> Suparna Bhattacharya <suparna@in.ibm.com> wrote:
> >
> > When streaming AIO requests are in progress on multiple
> >  io contexts, flushing the AIO workqueue on I/O cancellation
> >  or process exit could potentially end up waiting for a
> >  long time as fresh requests from other active ioctxs keep
> >  getting queued up.
> 
> But flush_workqueue() will ignore any newly-added work requests:
> 
>  * This function will sample each workqueue's current insert_sequence number and
>  * will sleep until the head sequence is greater than or equal to that.  This
>  * means that we sleep until all works which were queued on entry have been
>  * handled, but we are not livelocked by new incoming ones.

True. I must have had an old version of workqueue.c in mind ... which
had a warning of that sort; I didn't realise that had been fixed.

Now that we have included the patch to splice the AIO runlist, 
newer requests getting queued up on the ioctx shouldn't livelock us. 

So yes, this isn't strictly necessary. Cancellation on one ioctx still
waits for ongoing retries on all other ioctxs system-wide to finish,
rather than just those for the io context being cancelled, but that's
probably not so bad. We can wait till that shows up as a real problem.

> 
> Now, flush_workqueue() is potentially inefficient on SMP because it flushes
> each CPU's workqueue sequentially.  But we can fix that in
> flush_workqueue() by converting it to a two-pass approach:
> 
> a) gather each CPU's insert_sequence number into a local array[NR_CPUS]
> 
> b) wait until each CPU's remove_sequence number exceeds the previously-gathered
>    insert_sequence number.
> 

Regards
Suparna


-- 
Suparna Bhattacharya (suparna@in.ibm.com)
Linux Technology Center
IBM Software Labs, India


