linux-kernel.vger.kernel.org archive mirror
* [PATCH] ide task_map_rq() preempt count imbalance
@ 2003-09-03  5:52 Tejun Huh
  2003-09-03  6:40 ` Jens Axboe
  2003-09-03 15:53 ` Bartlomiej Zolnierkiewicz
  0 siblings, 2 replies; 5+ messages in thread
From: Tejun Huh @ 2003-09-03  5:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: B.Zolnierkiewicz

 Hello,

 The 2.5 kernel kept failing on my test machine with a lot of
"scheduling while atomic" messages.  The offending function was
include/linux/ide.h:task_sectors(), which calls task_map_rq() followed
by process_that_request_first(), taskfile_{input|output}_data() and
task_unmap_rq().

static inline void *task_map_rq(struct request *rq, unsigned long *flags)
{
	/*
	 * fs request
	 */
	if (rq->cbio)
		return rq_map_buffer(rq, flags);

	/*
	 * task request
	 */
	return rq->buffer + blk_rq_offset(rq);
}

static inline void task_unmap_rq(struct request *rq, char *buffer, unsigned long *flags)
{
	if (rq->cbio)
		rq_unmap_buffer(buffer, flags);
}

 rq_[un]map_buffer() eventually calls into k[un]map_atomic(), which
adjusts preempt_count().  The problem is that rq->cbio is cleared by
process_that_request_first() before task_unmap_rq() is called, so
preempt_count() isn't decremented properly.

 As the rq->cbio test seems to be there only to avoid the cost of
calling into k[un]map_atomic(), I just removed the optimization and it
worked for me.

 I'm not familiar with the IDE and block layers, so if I got something
wrong, please point it out.

-- 
tejun

# This is a BitKeeper generated patch for the following project:
# Project Name: Linux kernel tree
# This patch format is intended for GNU patch command version 2.5 or higher.
# This patch includes the following deltas:
#	           ChangeSet	1.1412  -> 1.1413 
#	 include/linux/ide.h	1.72    -> 1.73   
#
# The following is the BitKeeper ChangeSet Log
# --------------------------------------------
# 03/09/03	tj@atj.dyndns.org	1.1413
# - task_[un]map_rq() preempt count imbalance fixed.
# --------------------------------------------
#
diff -Nru a/include/linux/ide.h b/include/linux/ide.h
--- a/include/linux/ide.h	Wed Sep  3 14:50:20 2003
+++ b/include/linux/ide.h	Wed Sep  3 14:50:20 2003
@@ -853,22 +853,12 @@
 
 static inline void *task_map_rq(struct request *rq, unsigned long *flags)
 {
-	/*
-	 * fs request
-	 */
-	if (rq->cbio)
-		return rq_map_buffer(rq, flags);
-
-	/*
-	 * task request
-	 */
-	return rq->buffer + blk_rq_offset(rq);
+	return rq_map_buffer(rq, flags);
 }
 
 static inline void task_unmap_rq(struct request *rq, char *buffer, unsigned long *flags)
 {
-	if (rq->cbio)
-		rq_unmap_buffer(buffer, flags);
+	rq_unmap_buffer(buffer, flags);
 }
 
 #endif /* !CONFIG_IDE_TASKFILE_IO */

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH] ide task_map_rq() preempt count imbalance
  2003-09-03  5:52 [PATCH] ide task_map_rq() preempt count imbalance Tejun Huh
@ 2003-09-03  6:40 ` Jens Axboe
  2003-09-03  7:00   ` Tejun Huh
  2003-09-03 15:53 ` Bartlomiej Zolnierkiewicz
  1 sibling, 1 reply; 5+ messages in thread
From: Jens Axboe @ 2003-09-03  6:40 UTC (permalink / raw)
  To: Tejun Huh; +Cc: linux-kernel, B.Zolnierkiewicz

On Wed, Sep 03 2003, Tejun Huh wrote:
>  Hello,
> 
>  The 2.5 kernel kept failing on my test machine with a lot of
> "scheduling while atomic" messages.  The offending function was
> include/linux/ide.h:task_sectors(), which calls task_map_rq() followed
> by process_that_request_first(), taskfile_{input|output}_data() and
> task_unmap_rq().
> 
> static inline void *task_map_rq(struct request *rq, unsigned long *flags)
> {
> 	/*
> 	 * fs request
> 	 */
> 	if (rq->cbio)
> 		return rq_map_buffer(rq, flags);
> 
> 	/*
> 	 * task request
> 	 */
> 	return rq->buffer + blk_rq_offset(rq);
> }
> 
> static inline void task_unmap_rq(struct request *rq, char *buffer, unsigned long *flags)
> {
> 	if (rq->cbio)
> 		rq_unmap_buffer(buffer, flags);
> }
> 
>  rq_[un]map_buffer() eventually calls into k[un]map_atomic(), which
> adjusts preempt_count().  The problem is that rq->cbio is cleared by
> process_that_request_first() before task_unmap_rq() is called, so
> preempt_count() isn't decremented properly.
> 
>  As the rq->cbio test seems to be there only to avoid the cost of
> calling into k[un]map_atomic(), I just removed the optimization and it
> worked for me.
> 
>  I'm not familiar with the IDE and block layers, so if I got something
> wrong, please point it out.
> 
> [BitKeeper patch snipped]

This doesn't work; it's perfectly legit to use task_map_rq() on a
non-bio-backed request. You need to fix this differently.

-- 
Jens Axboe



* Re: [PATCH] ide task_map_rq() preempt count imbalance
  2003-09-03  6:40 ` Jens Axboe
@ 2003-09-03  7:00   ` Tejun Huh
  0 siblings, 0 replies; 5+ messages in thread
From: Tejun Huh @ 2003-09-03  7:00 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Tejun Huh, linux-kernel, B.Zolnierkiewicz

On Wed, Sep 03, 2003 at 08:40:57AM +0200, Jens Axboe wrote:
> 
> This doesn't work, it's perfectly legit to use task_map_rq() on a
> non-bio backed request. You need to fix this differently.
> 

 I see.  rq->cbio is required by rq_map_buffer().  I have no idea how
this should be done, so please ignore the patch and consider it a bug
report.

-- 
tejun



* Re: [PATCH] ide task_map_rq() preempt count imbalance
  2003-09-03  5:52 [PATCH] ide task_map_rq() preempt count imbalance Tejun Huh
  2003-09-03  6:40 ` Jens Axboe
@ 2003-09-03 15:53 ` Bartlomiej Zolnierkiewicz
  2003-09-06  2:18   ` Tejun Huh
  1 sibling, 1 reply; 5+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2003-09-03 15:53 UTC (permalink / raw)
  To: Tejun Huh; +Cc: linux-kernel

On Wednesday 03 of September 2003 07:52, Tejun Huh wrote:
>  Hello,
>
>  The 2.5 kernel kept failing on my test machine with a lot of
> "scheduling while atomic" messages.  The offending function was
> include/linux/ide.h:task_sectors(), which calls task_map_rq() followed
> by process_that_request_first(), taskfile_{input|output}_data() and
> task_unmap_rq().
>
> static inline void *task_map_rq(struct request *rq, unsigned long *flags)
> {
> 	/*
> 	 * fs request
> 	 */
> 	if (rq->cbio)
> 		return rq_map_buffer(rq, flags);
>
> 	/*
> 	 * task request
> 	 */
> 	return rq->buffer + blk_rq_offset(rq);
> }
>
> static inline void task_unmap_rq(struct request *rq, char *buffer, unsigned long *flags)
> {
> 	if (rq->cbio)
> 		rq_unmap_buffer(buffer, flags);
> }
>
>  rq_[un]map_buffer() eventually calls into k[un]map_atomic(), which
> adjusts preempt_count().  The problem is that rq->cbio is cleared by
> process_that_request_first() before task_unmap_rq() is called, so
> preempt_count() isn't decremented properly.

Good catch.

>  As the rq->cbio test seems to be there only to avoid the cost of
> calling into k[un]map_atomic(), I just removed the optimization and it
> worked for me.
>
>  I'm not familiar with the IDE and block layers, so if I got something
> wrong, please point it out.

As Jens pointed out, it's not a proper fix.  Please try the attached patch.

You are using PIO mode with the "IDE taskfile IO" option enabled.
Please also check whether this preempt count bug happens with taskfile IO
disabled (from a quick look at the code it shouldn't, but...).

Do you have any other IDE problems?
Do multi-sector PIO transfers with taskfile IO work for you (hdparm -m)?

--bartlomiej


ide: fix preempt count imbalance with taskfile PIO

Noticed by Tejun Huh <tejun@aratech.co.kr>.

 include/linux/ide.h |   39 +++++++++++++--------------------------
 1 files changed, 13 insertions(+), 26 deletions(-)

diff -puN include/linux/ide.h~ide-tf-pio-preempt-fix include/linux/ide.h
--- linux-2.6.0-test4-bk3/include/linux/ide.h~ide-tf-pio-preempt-fix	2003-09-03 17:50:15.974731504 +0200
+++ linux-2.6.0-test4-bk3-root/include/linux/ide.h	2003-09-03 17:50:15.978730896 +0200
@@ -850,29 +850,6 @@ static inline void ide_unmap_buffer(stru
 	if (rq->bio)
 		bio_kunmap_irq(buffer, flags);
 }
-
-#else /* !CONFIG_IDE_TASKFILE_IO */
-
-static inline void *task_map_rq(struct request *rq, unsigned long *flags)
-{
-	/*
-	 * fs request
-	 */
-	if (rq->cbio)
-		return rq_map_buffer(rq, flags);
-
-	/*
-	 * task request
-	 */
-	return rq->buffer + blk_rq_offset(rq);
-}
-
-static inline void task_unmap_rq(struct request *rq, char *buffer, unsigned long *flags)
-{
-	if (rq->cbio)
-		rq_unmap_buffer(buffer, flags);
-}
-
 #endif /* !CONFIG_IDE_TASKFILE_IO */
 
 #define IDE_CHIPSET_PCI_MASK	\
@@ -1471,9 +1448,19 @@ static inline void task_sectors(ide_driv
 				unsigned nsect, int rw)
 {
 	unsigned long flags;
+	unsigned int bio_rq;
 	char *buf;
 
-	buf = task_map_rq(rq, &flags);
+	/*
+	 * bio_rq flag is needed because we can call
+	 * rq_unmap_buffer() with rq->cbio == NULL
+	 */
+	bio_rq = rq->cbio ? 1 : 0;
+
+	if (bio_rq)
+		buf = rq_map_buffer(rq, &flags);	/* fs request */
+	else
+		buf = rq->buffer + blk_rq_offset(rq);	/* task request */
 
 	/*
 	 * IRQ can happen instantly after reading/writing
@@ -1486,9 +1473,9 @@ static inline void task_sectors(ide_driv
 	else
 		taskfile_input_data(drive, buf, nsect * SECTOR_WORDS);
 
-	task_unmap_rq(rq, buf, &flags);
+	if (bio_rq)
+		rq_unmap_buffer(buf, &flags);
 }
-
 #endif /* CONFIG_IDE_TASKFILE_IO */
 
 extern int drive_is_ready(ide_drive_t *);

_




* Re: [PATCH] ide task_map_rq() preempt count imbalance
  2003-09-03 15:53 ` Bartlomiej Zolnierkiewicz
@ 2003-09-06  2:18   ` Tejun Huh
  0 siblings, 0 replies; 5+ messages in thread
From: Tejun Huh @ 2003-09-06  2:18 UTC (permalink / raw)
  To: Bartlomiej Zolnierkiewicz; +Cc: Tejun Huh, linux-kernel

On Wed, Sep 03, 2003 at 05:53:04PM +0200, Bartlomiej Zolnierkiewicz wrote:
> 
> As Jens pointed out its not a proper fix.  Please try attached patch.
> 
> You are using PIO mode with "IDE taskfile IO" option enabled.
> Please also check if this preempt count bug happens with taskfile IO
> disabled (from quick look at the code it shouldn't but...).
> 
> Do you have any other IDE problems?
> Do multi-sector PIO transfers with taskfile IO work for you (hdparm -m)?

 Hello Bartlomiej,

 Sorry for the late response.  Your patch works perfectly.  Disabling
taskfile IO doesn't hurt anything either, and multi-sector PIO is turned
on by default (16) and seems OK.  Thanks.

-- 
tejun


