* [PATCH][2.6-mm] Avoid flushing AIO workqueue on cancel/exit
From: Suparna Bhattacharya @ 2003-10-21 10:25 UTC (permalink / raw)
  To: akpm; +Cc: linux-aio, linux-kernel

When streaming AIO requests are in progress on multiple
io contexts, flushing the AIO workqueue on I/O cancellation
or process exit can end up waiting for a long time, as
fresh requests from other active ioctxs keep getting
queued up. I haven't run into this in practice, but it
looks like a problem.
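
For context, this is the teardown sequence being removed
(it appears in both exit_aio() and io_destroy(), per the
hunks below):

	aio_cancel_all(ctx);
	wait_for_all_aios(ctx);
	/*
	 * aio_wq is a single workqueue shared by all ioctxs, so
	 * this flush also waits for work queued by unrelated,
	 * still-active contexts, and under streaming AIO that
	 * work keeps coming.
	 */
	flush_workqueue(aio_wq);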

Instead, this patch increments the ioctx reference count
before queueing up a retry, ensuring that the ioctx doesn't
go away until the workqueue handler has finished processing
it and released its reference.

Note: This does mean that exit_mmap can happen before the
last reference to the ioctx is gone, but only after all
iocbs have completed using the caller's mm.
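
In outline, the new rule pairs every queue_work() on a ctx
with a reference (condensed from the hunks below; the elided
body of the handler is unchanged):

	static inline void queue_ctx_work(struct kioctx *ctx)
	{
		get_ioctx(ctx);		/* pin ctx until the handler runs */
		queue_work(aio_wq, &ctx->wq);
	}

	static void aio_kick_handler(void *data)
	{
		struct kioctx *ctx = data;
		...
		if (requeue)
			queue_ctx_work(ctx);	/* takes a fresh reference */
		put_ioctx(ctx);		/* drop the queue-time reference */
	}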

Regards
Suparna

-- 
Suparna Bhattacharya (suparna@in.ibm.com)
Linux Technology Center
IBM Software Labs, India

diffstat aio-undo-flushworkqueue.patch

 aio.c |   28 ++++++++++++----------------
 1 files changed, 12 insertions(+), 16 deletions(-)

diff -urp linux-2.6.0-test6-mm4/fs/aio.c 260t6mm4/fs/aio.c
--- linux-2.6.0-test6-mm4/fs/aio.c	Tue Oct 21 19:20:50 2003
+++ 260t6mm4/fs/aio.c	Tue Oct 21 19:09:26 2003
@@ -63,6 +63,12 @@ LIST_HEAD(fput_head);
 
 static void aio_kick_handler(void *);
 
+static inline void queue_ctx_work(struct kioctx *ctx)
+{
+	get_ioctx(ctx);
+	queue_work(aio_wq, &ctx->wq);
+}
+
 /* aio_setup
  *	Creates the slab caches used by the aio routines, panic on
  *	failure as this is done early during the boot sequence.
@@ -349,15 +355,9 @@ void exit_aio(struct mm_struct *mm)
 		aio_cancel_all(ctx);
 
 		wait_for_all_aios(ctx);
-		/*
-		 * this is an overkill, but ensures we don't leave
-		 * the ctx on the aio_wq
-		 */
-		flush_workqueue(aio_wq);
 
 		if (1 != atomic_read(&ctx->users))
-			printk(KERN_DEBUG
-				"exit_aio:ioctx still alive: %d %d %d\n",
+			pr_debug("exit_aio:ioctx still alive: %d %d %d\n",
 				atomic_read(&ctx->users), ctx->dead,
 				ctx->reqs_active);
 		put_ioctx(ctx);
@@ -807,7 +812,7 @@ static inline void aio_run_iocbs(struct 
 	requeue = __aio_run_iocbs(ctx);
 	spin_unlock_irq(&ctx->ctx_lock);
 	if (requeue)
-		queue_work(aio_wq, &ctx->wq);
+		queue_ctx_work(ctx);
 }
 
 /*
@@ -833,7 +838,8 @@ static void aio_kick_handler(void *data)
 	spin_unlock_irq(&ctx->ctx_lock);
 	set_fs(oldfs);
 	if (requeue)
-		queue_work(aio_wq, &ctx->wq);
+		queue_ctx_work(ctx);
+	put_ioctx(ctx);
 }
 
 
@@ -854,7 +860,7 @@ void queue_kicked_iocb(struct kiocb *ioc
 	run = __queue_kicked_iocb(iocb);
 	spin_unlock_irqrestore(&ctx->ctx_lock, flags);
 	if (run) {
-		queue_work(aio_wq, &ctx->wq);
+		queue_ctx_work(ctx);
 		aio_wakeups++;
 	}
 }
@@ -1201,11 +1207,6 @@ static void io_destroy(struct kioctx *io
 
 	aio_cancel_all(ioctx);
 	wait_for_all_aios(ioctx);
-	/*
-	 * this is an overkill, but ensures we don't leave
-	 * the ctx on the aio_wq
-	 */
-	flush_workqueue(aio_wq);
 	put_ioctx(ioctx);	/* once for the lookup */
 }
 
@@ -1524,7 +1525,7 @@ int io_submit_one(struct kioctx *ctx, st
 	spin_unlock_irq(&ctx->ctx_lock);
 
 	if (-EIOCBRETRY == ret)
-		queue_work(aio_wq, &ctx->wq);
+		queue_ctx_work(ctx);
 
 	if (need_putctx)
 		put_ioctx(ctx);
