* I/O and pdflush
@ 2009-07-11 17:27 Fernando Silveira
  2009-07-12  8:04 ` Wu Fengguang
  0 siblings, 1 reply; 16+ messages in thread
From: Fernando Silveira @ 2009-07-11 17:27 UTC (permalink / raw)
  To: linux-kernel

Hi.

I'm having a hard time with an application that writes sequentially
250GB of non-stop data directly to a solid state disk (OCZ SSD CORE
v2) device and I hope you can help me. The command "dd if=/dev/zero
of=/dev/sdc bs=4M" reproduces the same symptoms I'm having and writes
exactly as that application does.

The problem is that after some time writing at 70MB/s, the rate
eventually drops to about 25MB/s and does not recover until a long
time has passed (from 1 to 30 minutes). This happens much more often
when the "vm.dirty_*" settings are at their defaults (30 secs to
expire, 5 secs for writeback, 10% and 40% for the background and
normal ratios); when I set them to 1 second or even 0, the problem
happens much less often and the period stuck at 25MB/s is much shorter.
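
(For reference, a minimal sketch of how these "vm.dirty_*" knobs can be
set with sysctl(8); the values below just mirror the defaults described
above and are not a recommendation:)

        sysctl -w vm.dirty_expire_centisecs=3000    # 30 secs to expire
        sysctl -w vm.dirty_writeback_centisecs=500  # 5 secs for writeback
        sysctl -w vm.dirty_background_ratio=10      # background ratio
        sysctl -w vm.dirty_ratio=40                 # dirty (normal) ratio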

In one of my experiments, I could see that writing some blocks of
data (approx. 48 blocks of 4MB each time) at a random position of the
"disk" increases the chance of the write rate dropping to 25MB/s.
You can see in this graph[1] that after the 7th random big write (at
66 GB) it drops to 25MB/s. The writes happened at the following
positions (in GB): 10, 20, 30, 39, 48, 57, 66, 73, 80, 90, 100, 109,
118, 128, 137, 147, and 156.

As I don't know much about kernel internals, my guess is that the SSD
might be "hiccuping" and some kind of kernel I/O scheduler or pdflush
decreases its rate to avoid write errors, but I really don't know.

Could somebody tell me how I could debug the kernel and any of its
modules to understand exactly why writing behaves this way? Maybe I
could do it just by logging write errors or something, I don't know.
Even a pointer to which part I should start analyzing would be a huge
help.

Thanks.

1. http://rootshell.be/~swrh/ssd-tests/ssd-no_dirty_buffer_with_random_192mb_writes.png

PS: This setup is fed by two A/D converters which provide 25MB/s of
data each, so my writing software needs a sustained sequential write
rate of at least 50MB/s.

-- 
Fernando Silveira <fsilveira@gmail.com>


* Re: I/O and pdflush
  2009-07-11 17:27 I/O and pdflush Fernando Silveira
@ 2009-07-12  8:04 ` Wu Fengguang
  2009-08-28 21:48   ` Fernando Silveira
  0 siblings, 1 reply; 16+ messages in thread
From: Wu Fengguang @ 2009-07-12  8:04 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: linux-kernel

On Sat, Jul 11, 2009 at 02:27:25PM -0300, Fernando Silveira wrote:
> Hi.
> 
> I'm having a hard time with an application that writes sequentially
> 250GB of non-stop data directly to a solid state disk (OCZ SSD CORE
> v2) device and I hope you can help me. The command "dd if=/dev/zero
> of=/dev/sdc bs=4M" reproduces the same symptoms I'm having and writes
> exactly as that application does.
> 
> The problem is that after some time writing at 70MB/s, the rate
> eventually drops to about 25MB/s and does not recover until a long
> time has passed (from 1 to 30 minutes). This happens much more often
> when the "vm.dirty_*" settings are at their defaults (30 secs to
> expire, 5 secs for writeback, 10% and 40% for the background and
> normal ratios); when I set them to 1 second or even 0, the problem
> happens much less often and the period stuck at 25MB/s is much shorter.
> 
> In one of my experiments, I could see that writing some blocks of
> data (approx. 48 blocks of 4MB each time) at a random position of the
> "disk" increases the chance of the write rate dropping to 25MB/s.
> You can see in this graph[1] that after the 7th random big write (at
> 66 GB) it drops to 25MB/s. The writes happened at the following
> positions (in GB): 10, 20, 30, 39, 48, 57, 66, 73, 80, 90, 100, 109,
> 118, 128, 137, 147, and 156.
> 
> As I don't know much about kernel internals, my guess is that the SSD
> might be "hiccuping" and some kind of kernel I/O scheduler or pdflush
> decreases its rate to avoid write errors, but I really don't know.
> 
> Could somebody tell me how I could debug the kernel and any of its
> modules to understand exactly why writing behaves this way? Maybe I
> could do it just by logging write errors or something, I don't know.
> Even a pointer to which part I should start analyzing would be a huge
> help.
> 
> Thanks.
> 
> 1. http://rootshell.be/~swrh/ssd-tests/ssd-no_dirty_buffer_with_random_192mb_writes.png
> 
> PS: This setup is fed by two A/D converters which provide 25MB/s of
> data each, so my writing software needs a sustained sequential write
> rate of at least 50MB/s.

Hi Fernando,

What's your kernel version?  Can the following patch help?

Thanks,
Fengguang

---
 fs/fs-writeback.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- mm.orig/fs/fs-writeback.c
+++ mm/fs/fs-writeback.c
@@ -325,7 +325,8 @@ __sync_single_inode(struct inode *inode,
 				 * soon as the queue becomes uncongested.
 				 */
 				inode->i_state |= I_DIRTY_PAGES;
-				if (wbc->nr_to_write <= 0) {
+				if (wbc->nr_to_write <= 0 ||
+				    wbc->encountered_congestion) {
 					/*
 					 * slice used up: queue for next turn
 					 */


* Re: I/O and pdflush
  2009-07-12  8:04 ` Wu Fengguang
@ 2009-08-28 21:48   ` Fernando Silveira
  2009-08-29 10:12     ` Wu Fengguang
  0 siblings, 1 reply; 16+ messages in thread
From: Fernando Silveira @ 2009-08-28 21:48 UTC (permalink / raw)
  To: Wu Fengguang; +Cc: linux-kernel

On Sun, Jul 12, 2009 at 05:04, Wu Fengguang<fengguang.wu@intel.com> wrote:
> On Sat, Jul 11, 2009 at 02:27:25PM -0300, Fernando Silveira wrote:
>> I'm having a hard time with an application that writes sequentially
>> 250GB of non-stop data directly to a solid state disk (OCZ SSD CORE
>> v2) device and I hope you can help me. The command "dd if=/dev/zero
>> of=/dev/sdc bs=4M" reproduces the same symptoms I'm having and writes
>> exactly as that application does.
>
> What's your kernel version?  Can the following patch help?

Sorry for the delay, but I could not test it before.

Unfortunately it did not help or change any symptoms at all. The
kernel I tested it on was an otherwise unpatched 2.6.30.1.

Do you have any other hint?

-- 
Fernando Silveira <fsilveira@gmail.com>


* Re: I/O and pdflush
  2009-08-28 21:48   ` Fernando Silveira
@ 2009-08-29 10:12     ` Wu Fengguang
  2009-08-29 10:21       ` Wu Fengguang
  0 siblings, 1 reply; 16+ messages in thread
From: Wu Fengguang @ 2009-08-29 10:12 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1259 bytes --]

On Sat, Aug 29, 2009 at 05:48:40AM +0800, Fernando Silveira wrote:
> On Sun, Jul 12, 2009 at 05:04, Wu Fengguang<fengguang.wu@intel.com> wrote:
> > On Sat, Jul 11, 2009 at 02:27:25PM -0300, Fernando Silveira wrote:
> >> I'm having a hard time with an application that writes sequentially
> >> 250GB of non-stop data directly to a solid state disk (OCZ SSD CORE
> >> v2) device and I hope you can help me. The command "dd if=/dev/zero
> >> of=/dev/sdc bs=4M" reproduces the same symptoms I'm having and writes
> >> exactly as that application does.
> >
> > What's your kernel version?  Can the following patch help?
> 
> Sorry for the delay, but I could not test it before.
> 
> Unfortunately it did not help or change any symptoms at all. The
> kernel I tested it on was an otherwise unpatched 2.6.30.1.
> 
> Do you have any other hint?

Sure. Attached is a writeback debug patch. It will generate lots of
kernel messages. You can just stop your klogd, start your workload
and monitor the SSD writeback throughput via tools like iostat/dstat.
When it goes into the suboptimal 25MB/s state for several seconds, run
        dmesg > dmesg-writeback
and send me the log.

And it is advised to turn on the kconfig option CONFIG_PRINTK_TIME=y.
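
For example, a minimal sketch of the capture procedure (assuming the
SSD is still sdc; stop klogd however your distribution does it):

        killall klogd                       # so dmesg keeps the debug output
        dd if=/dev/zero of=/dev/sdc bs=4M & # or your real workload
        iostat 1 > iostat.log &             # per-second throughput
        # ... once the 25MB/s state lasts several seconds:
        dmesg > dmesg-writeback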

Thanks,
Fengguang

[-- Attachment #2: writeback-debug.patch --]
[-- Type: text/x-diff, Size: 2685 bytes --]

 mm/page-writeback.c |   38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

--- mm.orig/mm/page-writeback.c
+++ mm/mm/page-writeback.c
@@ -116,6 +116,33 @@ EXPORT_SYMBOL(laptop_mode);
 
 /* End of sysctl-exported parameters */
 
+#define writeback_debug_report(n, wbc) do {                               \
+	__writeback_debug_report(n, wbc, __FILE__, __LINE__, __FUNCTION__); \
+} while (0)
+
+void print_writeback_control(struct writeback_control *wbc)
+{
+	printk(KERN_DEBUG
+			"global dirty %lu writeback %lu nfs %lu "
+			"flags %c%c towrite %ld skipped %ld\n",
+			global_page_state(NR_FILE_DIRTY),
+			global_page_state(NR_WRITEBACK),
+			global_page_state(NR_UNSTABLE_NFS),
+			wbc->encountered_congestion ? 'C':'_',
+			wbc->more_io ? 'M':'_',
+			wbc->nr_to_write,
+			wbc->pages_skipped);
+}
+
+void __writeback_debug_report(long n, struct writeback_control *wbc,
+		const char *file, int line, const char *func)
+{
+	printk(KERN_DEBUG "%s %d %s: %s(%d) %ld\n",
+			file, line, func,
+			current->comm, current->pid,
+			n);
+	print_writeback_control(wbc);
+}
 
 static void background_writeout(unsigned long _min_pages);
 
@@ -546,6 +573,7 @@ static void balance_dirty_pages(struct a
 			pages_written += write_chunk - wbc.nr_to_write;
 			get_dirty_limits(&background_thresh, &dirty_thresh,
 				       &bdi_thresh, bdi);
+			writeback_debug_report(pages_written, &wbc);
 		}
 
 		/*
@@ -572,6 +600,7 @@ static void balance_dirty_pages(struct a
 			break;		/* We've done our duty */
 
 		congestion_wait(WRITE, HZ/10);
+		writeback_debug_report(-pages_written, &wbc);
 	}
 
 	if (bdi_nr_reclaimable + bdi_nr_writeback < bdi_thresh &&
@@ -666,6 +695,11 @@ void throttle_vm_writeout(gfp_t gfp_mask
 			global_page_state(NR_WRITEBACK) <= dirty_thresh)
                         	break;
                 congestion_wait(WRITE, HZ/10);
+		printk(KERN_DEBUG "throttle_vm_writeout: "
+				"congestion_wait on %lu+%lu > %lu\n",
+				global_page_state(NR_UNSTABLE_NFS),
+				global_page_state(NR_WRITEBACK),
+				dirty_thresh);
 
 		/*
 		 * The caller might hold locks which can prevent IO completion
@@ -715,7 +749,9 @@ static void background_writeout(unsigned
 			else
 				break;
 		}
+		writeback_debug_report(min_pages, &wbc);
 	}
+	writeback_debug_report(min_pages, &wbc);
 }
 
 /*
@@ -788,7 +824,9 @@ static void wb_kupdate(unsigned long arg
 				break;	/* All the old data is written */
 		}
 		nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		writeback_debug_report(nr_to_write, &wbc);
 	}
+	writeback_debug_report(nr_to_write, &wbc);
 	if (time_before(next_jif, jiffies + HZ))
 		next_jif = jiffies + HZ;
 	if (dirty_writeback_interval)


* Re: I/O and pdflush
  2009-08-29 10:12     ` Wu Fengguang
@ 2009-08-29 10:21       ` Wu Fengguang
  2009-08-31 13:24         ` Fernando Silveira
  0 siblings, 1 reply; 16+ messages in thread
From: Wu Fengguang @ 2009-08-29 10:21 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: linux-kernel

On Sat, Aug 29, 2009 at 06:12:47PM +0800, Wu Fengguang wrote:
> On Sat, Aug 29, 2009 at 05:48:40AM +0800, Fernando Silveira wrote:
> > On Sun, Jul 12, 2009 at 05:04, Wu Fengguang<fengguang.wu@intel.com> wrote:
> > > On Sat, Jul 11, 2009 at 02:27:25PM -0300, Fernando Silveira wrote:
> > >> I'm having a hard time with an application that writes sequentially
> > >> 250GB of non-stop data directly to a solid state disk (OCZ SSD CORE
> > >> v2) device and I hope you can help me. The command "dd if=/dev/zero
> > >> of=/dev/sdc bs=4M" reproduces the same symptoms I'm having and writes
> > >> exactly as that application does.
> > >
> > > What's your kernel version?  Can the following patch help?
> > 
> > Sorry for the delay, but I could not test it before.
> > 
> > Unfortunately it did not help or change any symptoms at all. The
> > kernel I tested it on was an otherwise unpatched 2.6.30.1.
> > 
> > Do you have any other hint?
> 
> Sure. Attached is a writeback debug patch. It will generate lots of
> kernel messages. You can just stop your klogd, start your workload
> and monitor the SSD writeback throughput via tools like iostat/dstat.

Fernando, will you post the output of iostat/dstat too? Thanks!
The ssd-no_dirty_buffer_with_random_192mb_writes.png is a good overview,
however I'd like to also check out the numbers for each second :)

Thanks,
Fengguang

> When it goes into the suboptimal 25MB/s state for several seconds, run
>         dmesg > dmesg-writeback
> and send me the log.
> 
> And it is advised to turn on the kconfig option CONFIG_PRINTK_TIME=y.
> 
> Thanks,
> Fengguang

>  mm/page-writeback.c |   38 ++++++++++++++++++++++++++++++++++++++
>  1 file changed, 38 insertions(+)
> 
> --- mm.orig/mm/page-writeback.c
> +++ mm/mm/page-writeback.c
> @@ -116,6 +116,33 @@ EXPORT_SYMBOL(laptop_mode);
>  
>  /* End of sysctl-exported parameters */
>  
> +#define writeback_debug_report(n, wbc) do {                               \
> +	__writeback_debug_report(n, wbc, __FILE__, __LINE__, __FUNCTION__); \
> +} while (0)
> +
> +void print_writeback_control(struct writeback_control *wbc)
> +{
> +	printk(KERN_DEBUG
> +			"global dirty %lu writeback %lu nfs %lu "
> +			"flags %c%c towrite %ld skipped %ld\n",
> +			global_page_state(NR_FILE_DIRTY),
> +			global_page_state(NR_WRITEBACK),
> +			global_page_state(NR_UNSTABLE_NFS),
> +			wbc->encountered_congestion ? 'C':'_',
> +			wbc->more_io ? 'M':'_',
> +			wbc->nr_to_write,
> +			wbc->pages_skipped);
> +}
> +
> +void __writeback_debug_report(long n, struct writeback_control *wbc,
> +		const char *file, int line, const char *func)
> +{
> +	printk(KERN_DEBUG "%s %d %s: %s(%d) %ld\n",
> +			file, line, func,
> +			current->comm, current->pid,
> +			n);
> +	print_writeback_control(wbc);
> +}
>  
>  static void background_writeout(unsigned long _min_pages);
>  
> @@ -546,6 +573,7 @@ static void balance_dirty_pages(struct a
>  			pages_written += write_chunk - wbc.nr_to_write;
>  			get_dirty_limits(&background_thresh, &dirty_thresh,
>  				       &bdi_thresh, bdi);
> +			writeback_debug_report(pages_written, &wbc);
>  		}
>  
>  		/*
> @@ -572,6 +600,7 @@ static void balance_dirty_pages(struct a
>  			break;		/* We've done our duty */
>  
>  		congestion_wait(WRITE, HZ/10);
> +		writeback_debug_report(-pages_written, &wbc);
>  	}
>  
>  	if (bdi_nr_reclaimable + bdi_nr_writeback < bdi_thresh &&
> @@ -666,6 +695,11 @@ void throttle_vm_writeout(gfp_t gfp_mask
>  			global_page_state(NR_WRITEBACK) <= dirty_thresh)
>                          	break;
>                  congestion_wait(WRITE, HZ/10);
> +		printk(KERN_DEBUG "throttle_vm_writeout: "
> +				"congestion_wait on %lu+%lu > %lu\n",
> +				global_page_state(NR_UNSTABLE_NFS),
> +				global_page_state(NR_WRITEBACK),
> +				dirty_thresh);
>  
>  		/*
>  		 * The caller might hold locks which can prevent IO completion
> @@ -715,7 +749,9 @@ static void background_writeout(unsigned
>  			else
>  				break;
>  		}
> +		writeback_debug_report(min_pages, &wbc);
>  	}
> +	writeback_debug_report(min_pages, &wbc);
>  }
>  
>  /*
> @@ -788,7 +824,9 @@ static void wb_kupdate(unsigned long arg
>  				break;	/* All the old data is written */
>  		}
>  		nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
> +		writeback_debug_report(nr_to_write, &wbc);
>  	}
> +	writeback_debug_report(nr_to_write, &wbc);
>  	if (time_before(next_jif, jiffies + HZ))
>  		next_jif = jiffies + HZ;
>  	if (dirty_writeback_interval)



* Re: I/O and pdflush
  2009-08-29 10:21       ` Wu Fengguang
@ 2009-08-31 13:24         ` Fernando Silveira
  2009-08-31 14:00           ` Wu Fengguang
  0 siblings, 1 reply; 16+ messages in thread
From: Fernando Silveira @ 2009-08-31 13:24 UTC (permalink / raw)
  To: Wu Fengguang; +Cc: linux-kernel

On Sat, Aug 29, 2009 at 07:21, Wu Fengguang<fengguang.wu@intel.com> wrote:
> On Sat, Aug 29, 2009 at 06:12:47PM +0800, Wu Fengguang wrote:
>> On Sat, Aug 29, 2009 at 05:48:40AM +0800, Fernando Silveira wrote:
>> > On Sun, Jul 12, 2009 at 05:04, Wu Fengguang<fengguang.wu@intel.com> wrote:
>> > > On Sat, Jul 11, 2009 at 02:27:25PM -0300, Fernando Silveira wrote:
>> > >> I'm having a hard time with an application that writes sequentially
>> > >> 250GB of non-stop data directly to a solid state disk (OCZ SSD CORE
>> > >> v2) device and I hope you can help me. The command "dd if=/dev/zero
>> > >> of=/dev/sdc bs=4M" reproduces the same symptoms I'm having and writes
>> > >> exactly as that application does.
>>
>> Sure. Attached is a writeback debug patch. It will generate lots of
>> kernel messages. You can just stop your klogd, start your workload
>> and monitor the SSD writeback throughput via tools like iostat/dstat.
>
> Fernando, will you post the output of iostat/dstat too? Thanks!
> The ssd-no_dirty_buffer_with_random_192mb_writes.png is a good overview,
> however I'd like to also check out the numbers for each second :)
>
>> When it goes into the suboptimal 25MB/s state for several seconds, run
>>         dmesg > dmesg-writeback
>> and send me the log.
>>
>> And it is advised to turn on the kconfig option CONFIG_PRINTK_TIME=y.

Thanks for your help.

At the URL below you can find a tarball containing the data I logged
after patching the kernel with only your latest patch.

http://rootshell.be/~swrh/ssd-tests/log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M.tar.gz

Each file contained in the tarball is described below:

- log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-{0-ddv,1-table,2-plot}.*
  Logging statistics. The "0-ddv" file is the log of a modified
version of the dd(1) utility that prints writing statistics after every
write(2), "1-table" is a tabular version of the "0-ddv" file parsed with
awk(1), and "2-plot" is a plot image of the writing statistics.

- log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-dstat.log
  The result of "dstat > dstat.log" executed ~3 seconds after "ddv" started.

- log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-iostat.log
  The result of "iostat 1 > iostat.log" executed ~15 seconds after
"ddv" started.

- log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-dmesg_writeback_{1..67}.log
  The result of "for i in {1..67}; do dmesg > dmesg-writeback-$i.log;
sleep 1; done" executed when "ddv" had written about 42GB of data.

- log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-dmesg_writeback_{100..186}.log
  The result of "for i in {100..186}; do dmesg >
dmesg-writeback-$i.log; sleep 1; done" executed when "ddv" had written
about 51GB of data.

Please tell me if you need anything else. I could make a more
synchronized or informative logging output if you need.

Best regards,

-- 
Fernando Silveira <fsilveira@gmail.com>


* Re: I/O and pdflush
  2009-08-31 13:24         ` Fernando Silveira
@ 2009-08-31 14:00           ` Wu Fengguang
  2009-08-31 14:01             ` Wu Fengguang
  0 siblings, 1 reply; 16+ messages in thread
From: Wu Fengguang @ 2009-08-31 14:00 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: linux-kernel

Hi Fernando,

What's your SSD's IO parameters? Ie. output of this command:

        grep -r . /sys/block/sda/queue/        

Please replace 'sda' with your SSD device name.

Thanks,
Fengguang

On Mon, Aug 31, 2009 at 09:24:20PM +0800, Fernando Silveira wrote:
> On Sat, Aug 29, 2009 at 07:21, Wu Fengguang<fengguang.wu@intel.com> wrote:
> > On Sat, Aug 29, 2009 at 06:12:47PM +0800, Wu Fengguang wrote:
> >> On Sat, Aug 29, 2009 at 05:48:40AM +0800, Fernando Silveira wrote:
> >> > On Sun, Jul 12, 2009 at 05:04, Wu Fengguang<fengguang.wu@intel.com> wrote:
> >> > > On Sat, Jul 11, 2009 at 02:27:25PM -0300, Fernando Silveira wrote:
> >> > >> I'm having a hard time with an application that writes sequentially
> >> > >> 250GB of non-stop data directly to a solid state disk (OCZ SSD CORE
> >> > >> v2) device and I hope you can help me. The command "dd if=/dev/zero
> >> > >> of=/dev/sdc bs=4M" reproduces the same symptoms I'm having and writes
> >> > >> exactly as that application does.
> >>
> >> Sure. Attached is a writeback debug patch. It will generate lots of
> >> kernel messages. You can just stop your klogd, start your workload
> >> and monitor the SSD writeback throughput via tools like iostat/dstat.
> >
> > Fernando, will you post the output of iostat/dstat too? Thanks!
> > The ssd-no_dirty_buffer_with_random_192mb_writes.png is a good overview,
> > however I'd like to also check out the numbers for each second :)
> >
> >> When it goes into the suboptimal 25MB/s state for several seconds, run
> >>         dmesg > dmesg-writeback
> >> and send me the log.
> >>
> >> And it is advised to turn on the kconfig option CONFIG_PRINTK_TIME=y.
> 
> Thanks for your help.
> 
> At the URL below you can find a tarball containing the data I logged
> after patching the kernel with only your latest patch.
> 
> http://rootshell.be/~swrh/ssd-tests/log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M.tar.gz
> 
> Each file contained in the tarball is described below:
> 
> - log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-{0-ddv,1-table,2-plot}.*
>   Logging statistics. The "0-ddv" file is the log of a modified
> version of the dd(1) utility that prints writing statistics after every
> write(2), "1-table" is a tabular version of the "0-ddv" file parsed with
> awk(1), and "2-plot" is a plot image of the writing statistics.
> 
> - log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-dstat.log
>   The result of "dstat > dstat.log" executed ~3 seconds after "ddv" started.
> 
> - log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-iostat.log
>   The result of "iostat 1 > iostat.log" executed ~15 seconds after
> "ddv" started.
> 
> - log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-dmesg_writeback_{1..67}.log
>   The result of "for i in {1..67}; do dmesg > dmesg-writeback-$i.log;
> sleep 1; done" executed when "ddv" had written about 42GB of data.
> 
> - log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-dmesg_writeback_{100..186}.log
>   The result of "for i in {100..186}; do dmesg >
> dmesg-writeback-$i.log; sleep 1; done" executed when "ddv" had written
> about 51GB of data.
> 
> Please tell me if you need anything else. I could make a more
> synchronized or informative logging output if you need.
> 
> Best regards,
> 
> -- 
> Fernando Silveira <fsilveira@gmail.com>


* Re: I/O and pdflush
  2009-08-31 14:00           ` Wu Fengguang
@ 2009-08-31 14:01             ` Wu Fengguang
  2009-08-31 14:07               ` Wu Fengguang
  0 siblings, 1 reply; 16+ messages in thread
From: Wu Fengguang @ 2009-08-31 14:01 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: linux-kernel

On Mon, Aug 31, 2009 at 10:00:06PM +0800, Wu Fengguang wrote:
> Hi Fernando,
> 
> What's your SSD's IO parameters? Ie. output of this command:
> 
>         grep -r . /sys/block/sda/queue/        
> 
> Please replace 'sda' with your SSD device name.

Oh I guess it's sdc:

         grep -r . /sys/block/sdc/queue/        

Thanks,
Fengguang



* Re: I/O and pdflush
  2009-08-31 14:01             ` Wu Fengguang
@ 2009-08-31 14:07               ` Wu Fengguang
  2009-08-31 14:33                 ` Fernando Silveira
  0 siblings, 1 reply; 16+ messages in thread
From: Wu Fengguang @ 2009-08-31 14:07 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: linux-kernel

On Mon, Aug 31, 2009 at 10:01:13PM +0800, Wu Fengguang wrote:
> On Mon, Aug 31, 2009 at 10:00:06PM +0800, Wu Fengguang wrote:
> > Hi Fernando,
> > 
> > What's your SSD's IO parameters? Ie. output of this command:
> > 
> >         grep -r . /sys/block/sda/queue/        
> > 
> > Please replace 'sda' with your SSD device name.
> 
> Oh I guess it's sdc:
> 
>          grep -r . /sys/block/sdc/queue/        

BTW, would you run "iostat -x 1 5" (which will run for 5 seconds) when
doing I/O at ideal throughput, and when in the 25MB/s throughput state?

Thanks,
Fengguang


* Re: I/O and pdflush
  2009-08-31 14:07               ` Wu Fengguang
@ 2009-08-31 14:33                 ` Fernando Silveira
  2009-09-01  8:14                   ` Wu Fengguang
  0 siblings, 1 reply; 16+ messages in thread
From: Fernando Silveira @ 2009-08-31 14:33 UTC (permalink / raw)
  To: Wu Fengguang; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1483 bytes --]

On Mon, Aug 31, 2009 at 11:07, Wu Fengguang<fengguang.wu@intel.com> wrote:
> On Mon, Aug 31, 2009 at 10:01:13PM +0800, Wu Fengguang wrote:
>> On Mon, Aug 31, 2009 at 10:00:06PM +0800, Wu Fengguang wrote:
>> > Hi Fernando,
>> >
>> > What's your SSD's IO parameters? Ie. output of this command:
>> >
>> >         grep -r . /sys/block/sda/queue/
>> >
>> > Please replace 'sda' with your SSD device name.
>>
>> Oh I guess it's sdc:
>>
>>          grep -r . /sys/block/sdc/queue/

Here it is:

# grep -r . /sys/block/sdc/queue/
/sys/block/sdc/queue/nr_requests:128
/sys/block/sdc/queue/read_ahead_kb:128
/sys/block/sdc/queue/max_hw_sectors_kb:128
/sys/block/sdc/queue/max_sectors_kb:128
/sys/block/sdc/queue/scheduler:noop anticipatory [deadline] cfq
/sys/block/sdc/queue/hw_sector_size:512
/sys/block/sdc/queue/rotational:0
/sys/block/sdc/queue/nomerges:0
/sys/block/sdc/queue/rq_affinity:0
/sys/block/sdc/queue/iostats:1
/sys/block/sdc/queue/iosched/read_expire:500
/sys/block/sdc/queue/iosched/write_expire:5000
/sys/block/sdc/queue/iosched/writes_starved:2
/sys/block/sdc/queue/iosched/front_merges:1
/sys/block/sdc/queue/iosched/fifo_batch:16
#

These are probably default settings.

> BTW, would you run "iostat -x 1 5" (which will run for 5 seconds) when
> doing I/O at ideal throughput, and when in the 25MB/s throughput state?

Both files are attached (25mbps = 25MB/s, 80mbps = 80MB/s).

Thanks.

-- 
Fernando Silveira <fsilveira@gmail.com>

[-- Attachment #2: log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-iostat_x_1_5-80mbps_3.log --]
[-- Type: application/octet-stream, Size: 5411 bytes --]

Linux 2.6.30.1-fs33 (prosar-fcc2) 	08/31/2009 	_i686_

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.16    0.00    8.16   74.81    0.00   16.86

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.06     5.50    1.40    7.00    22.13    99.99    14.53     2.07  245.98  29.26  24.59
sda1              0.06     5.50    1.40    7.00    22.09    99.99    14.53     2.07  246.04  29.27  24.59
sdb               1.04     0.00    0.02    0.00     1.16     0.00    74.00     0.00    1.35   0.51   0.00
sdb1              0.04     0.00    0.00    0.00     0.13     0.00    36.53     0.00    0.56   0.44   0.00
sdb2              1.00     0.00    0.01    0.00     1.01     0.00    97.29     0.00    1.82   0.65   0.00
sdc               1.04 46806.13    0.02  368.56     1.16 47172.95   127.99   117.32  318.29   2.24  82.62
sdc1              0.04     0.00    0.00    0.00     0.13     0.00    36.53     0.00    0.71   0.53   0.00
sdc2              1.00 46806.13    0.01  368.56     1.01 47172.95   127.99   117.32  318.30   2.24  82.62

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   27.00   73.00    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    10.00    0.00   17.00     0.00   216.00    12.71     0.49    6.12   7.76  13.20
sda1              0.00    10.00    0.00   17.00     0.00   216.00    12.71     0.49    6.12   7.76  13.20
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00 153328.00    0.00 1196.00     0.00 153088.00   128.00   137.36  114.77   0.84 100.10
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.00 153328.00    0.00 1196.00     0.00 153088.00   128.00   137.36  114.77   0.84 100.10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   27.86   72.14    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    8.00     0.00    64.00     8.00     5.41  537.62 125.12 100.10
sda1              0.00     0.00    0.00    8.00     0.00    64.00     8.00     5.41  537.62 125.12 100.10
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00 153504.00    0.00 1200.00     0.00 153600.00   128.00   138.35  115.76   0.83 100.10
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.00 153504.00    0.00 1200.00     0.00 153600.00   128.00   138.35  115.76   0.83 100.10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   27.14   72.86    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     1.98    0.00    8.91     0.00    87.13     9.78     2.24  417.44 105.33  93.86
sda1              0.00     1.98    0.00    8.91     0.00    87.13     9.78     2.24  417.44 105.33  93.86
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00 149798.02    0.00 1187.13     0.00 151952.48   128.00   137.76  115.17   0.84  99.21
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.00 149798.02    0.00 1187.13     0.00 151952.48   128.00   137.76  115.17   0.84  99.21

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.50    0.00   23.38   76.12    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     1.00    0.00   12.00     0.00   104.00     8.67     0.09    7.42   5.75   6.90
sda1              0.00     1.00    0.00   12.00     0.00   104.00     8.67     0.09    7.42   5.75   6.90
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00 129120.00    0.00 1013.00     0.00 129664.00   128.00   139.81  116.65   0.99 100.00
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.00 129120.00    0.00 1013.00     0.00 129664.00   128.00   139.81  116.65   0.99 100.00


[-- Attachment #3: log-20090831_0908-MK020903042C0001B-prosar_fcc2-bs_4M-0-iostat_x_1_5-25mbps.log --]
[-- Type: application/octet-stream, Size: 5395 bytes --]

Linux 2.6.30.1-fs33 (prosar-fcc2) 	08/31/2009 	_i686_

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.16    0.00    7.93   74.27    0.00   17.64

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.07     5.72    0.57    6.91    15.94   101.03    15.63     2.12  283.40  32.21  24.11
sda1              0.07     5.72    0.57    6.91    15.91   101.03    15.63     2.12  283.48  32.22  24.11
sdb               1.10     0.00    0.02    0.00     1.23     0.00    74.00     0.00    1.35   0.51   0.00
sdb1              0.05     0.00    0.00    0.00     0.14     0.00    36.53     0.00    0.56   0.44   0.00
sdb2              1.06     0.00    0.01    0.00     1.07     0.00    97.29     0.00    1.82   0.65   0.00
sdc               1.10 45638.09    0.02  359.34     1.23 45995.35   128.00   115.88  322.44   2.27  81.58
sdc1              0.05     0.00    0.00    0.00     0.14     0.00    36.53     0.00    0.71   0.53   0.00
sdc2              1.06 45638.09    0.01  359.34     1.07 45995.35   128.00   115.88  322.45   2.27  81.58

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    4.00   96.00    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     1.00    0.00    5.00     0.00    48.00     9.60     0.41   82.20  52.40  26.20
sda1              0.00     1.00    0.00    5.00     0.00    48.00     9.60     0.41   82.20  52.40  26.20
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00 24384.00    0.00  197.00     0.00 25216.00   128.00   142.38  361.40   5.08 100.10
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.00 24384.00    0.00  197.00     0.00 25216.00   128.00   142.38  361.40   5.08 100.10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   13.00   87.00    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    9.00     0.00    72.00     8.00     0.99  110.56  59.33  53.40
sda1              0.00     0.00    0.00    9.00     0.00    72.00     8.00     0.99  110.56  59.33  53.40
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00 69088.00    0.00  552.00     0.00 70656.00   128.00   142.75  386.39   1.81 100.10
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.00 69088.00    0.00  552.00     0.00 70656.00   128.00   142.75  386.39   1.81 100.10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    6.00   94.00    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    10.00    0.00    9.00     0.00   144.00    16.00     4.36  169.89  84.22  75.80
sda1              0.00    10.00    0.00    9.00     0.00   144.00    16.00     4.36  169.89  84.22  75.80
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00 26543.00    0.00  195.00     0.00 24960.00   128.00   143.91  298.28   5.13 100.10
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.00 26543.00    0.00  195.00     0.00 24960.00   128.00   143.91  298.28   5.13 100.10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   12.00   88.00    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    7.00     0.00    64.00     9.14     5.02  947.00 143.00 100.10
sda1              0.00     0.00    0.00    7.00     0.00    64.00     9.14     5.02  947.00 143.00 100.10
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00 70993.00    0.00  568.00     0.00 72704.00   128.00   142.37  403.06   1.76 100.10
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.00 70993.00    0.00  568.00     0.00 72704.00   128.00   142.37  403.06   1.76 100.10



* Re: I/O and pdflush
  2009-08-31 14:33                 ` Fernando Silveira
@ 2009-09-01  8:14                   ` Wu Fengguang
       [not found]                     ` <6afc6d4a0909010710l2cf77fbbmb1ab192ed12a7efc@mail.gmail.com>
  0 siblings, 1 reply; 16+ messages in thread
From: Wu Fengguang @ 2009-09-01  8:14 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: linux-kernel

On Mon, Aug 31, 2009 at 10:33:43PM +0800, Fernando Silveira wrote:
> On Mon, Aug 31, 2009 at 11:07, Wu Fengguang<fengguang.wu@intel.com> wrote:
> > On Mon, Aug 31, 2009 at 10:01:13PM +0800, Wu Fengguang wrote:
> >> On Mon, Aug 31, 2009 at 10:00:06PM +0800, Wu Fengguang wrote:
> >> > Hi Fernando,
> >> >
> >> > What's your SSD's IO parameters? Ie. output of this command:
> >> >
> >> >         grep -r . /sys/block/sda/queue/
> >> >
> >> > Please replace 'sda' with your SSD device name.
> >>
> >> Oh I guess it's sdc:
> >>
> >>          grep -r . /sys/block/sdc/queue/
> 
> Here it is:
> 
> # grep -r . /sys/block/sdc/queue/
> /sys/block/sdc/queue/nr_requests:128
> /sys/block/sdc/queue/read_ahead_kb:128
> /sys/block/sdc/queue/max_hw_sectors_kb:128
> /sys/block/sdc/queue/max_sectors_kb:128
> /sys/block/sdc/queue/scheduler:noop anticipatory [deadline] cfq
> /sys/block/sdc/queue/hw_sector_size:512
> /sys/block/sdc/queue/rotational:0
> /sys/block/sdc/queue/nomerges:0
> /sys/block/sdc/queue/rq_affinity:0
> /sys/block/sdc/queue/iostats:1
> /sys/block/sdc/queue/iosched/read_expire:500
> /sys/block/sdc/queue/iosched/write_expire:5000
> /sys/block/sdc/queue/iosched/writes_starved:2
> /sys/block/sdc/queue/iosched/front_merges:1
> /sys/block/sdc/queue/iosched/fifo_batch:16
> #
> 
> These are probably default settings.
> 
> > BTW, would you run "iostat -x 1 5" (which will run for 5 seconds) when
> > doing I/O at ideal throughput, and when in the 25MB/s throughput state?
> 
> Both files are attached (25mbps = 25MB/s, 80mbps = 80MB/s).

The iostat-reported IO size is 64KB (avgrq-sz of 128 sectors x 512
bytes), which is half of max_sectors_kb=128. It is strange that the
optimal 128KB IO size is not reached in either case:

        Device:         rrqm/s   wrqm/s     r/s     w/s    rsec/s   wsec/s  avgrq-sz avgqu-sz   await  svctm  %util
case 1: sdc               0.00 69088.00     0.00  552.00     0.00 70656.00    128.00   142.75  386.39   1.81 100.10
case 2: sdc               0.00 153504.00    0.00 1200.00     0.00 153600.00   128.00   138.35  115.76   0.83 100.10

Fernando, could you try increasing these deadline parameters by 10
times?

        echo 160   > /sys/block/sdc/queue/iosched/fifo_batch
        echo 50000 > /sys/block/sdc/queue/iosched/write_expire

And try cfq iosched if that still fails? The iostat outputs would be
enough during the tests.
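
(For reference, the elevator can be switched on the fly through sysfs;
a minimal sketch, assuming sdc is still the SSD:)

        echo cfq > /sys/block/sdc/queue/scheduler
        cat /sys/block/sdc/queue/scheduler    # the active scheduler is shown in brackets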

Thanks,
Fengguang


* Re: I/O and pdflush
       [not found]                     ` <6afc6d4a0909010710l2cf77fbbmb1ab192ed12a7efc@mail.gmail.com>
@ 2009-09-02  3:05                       ` Wu Fengguang
       [not found]                         ` <6afc6d4a0909020429l2bfecee9xd00527fcaa323751@mail.gmail.com>
  0 siblings, 1 reply; 16+ messages in thread
From: Wu Fengguang @ 2009-09-02  3:05 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: linux-kernel

On Tue, Sep 01, 2009 at 10:10:53PM +0800, Fernando Silveira wrote:
> On Tue, Sep 1, 2009 at 05:14, Wu Fengguang<fengguang.wu@intel.com> wrote:
> > On Mon, Aug 31, 2009 at 10:33:43PM +0800, Fernando Silveira wrote:
> >> On Mon, Aug 31, 2009 at 11:07, Wu Fengguang<fengguang.wu@intel.com> wrote:
> >> > On Mon, Aug 31, 2009 at 10:01:13PM +0800, Wu Fengguang wrote:
> >> >> On Mon, Aug 31, 2009 at 10:00:06PM +0800, Wu Fengguang wrote:
> >> >> > Hi Fernando,
> >> >> >
> >> >> > What's your SSD's IO parameters? Ie. output of this command:
> >> >> >
> >> >> >         grep -r . /sys/block/sda/queue/
> >> >> >
> >> >> > Please replace 'sda' with your SSD device name.
> >> >>
> >> >> Oh I guess it's sdc:
> >> >>
> >> >>          grep -r . /sys/block/sdc/queue/
> >>
> >> Here it is:
> >>
> >> # grep -r . /sys/block/sdc/queue/
> >> /sys/block/sdc/queue/nr_requests:128
> >> /sys/block/sdc/queue/read_ahead_kb:128
> >> /sys/block/sdc/queue/max_hw_sectors_kb:128
> >> /sys/block/sdc/queue/max_sectors_kb:128
> >> /sys/block/sdc/queue/scheduler:noop anticipatory [deadline] cfq
> >> /sys/block/sdc/queue/hw_sector_size:512
> >> /sys/block/sdc/queue/rotational:0
> >> /sys/block/sdc/queue/nomerges:0
> >> /sys/block/sdc/queue/rq_affinity:0
> >> /sys/block/sdc/queue/iostats:1
> >> /sys/block/sdc/queue/iosched/read_expire:500
> >> /sys/block/sdc/queue/iosched/write_expire:5000
> >> /sys/block/sdc/queue/iosched/writes_starved:2
> >> /sys/block/sdc/queue/iosched/front_merges:1
> >> /sys/block/sdc/queue/iosched/fifo_batch:16
> >> #
> >>
> >> These are probably default settings.
> >>
> >> > BTW, would you run "iostat -x 1 5" (which will run for 5 seconds) when
> >> > doing I/O at ideal throughput, and when in the 25MB/s throughput state?
> >>
> >> Both files are attached (25mbps = 25MB/s, 80mbps = 80MB/s).
> >
> > The iostat-reported IO size is 64KB (avgrq-sz of 128 sectors x 512
> > bytes), which is half of max_sectors_kb=128. It is strange that the
> > optimal 128KB IO size is not reached in either case:
> >
> >        Device:         rrqm/s   wrqm/s     r/s     w/s    rsec/s   wsec/s  avgrq-sz avgqu-sz   await  svctm  %util
> > case 1: sdc               0.00 69088.00     0.00  552.00     0.00 70656.00    128.00   142.75  386.39   1.81 100.10
> > case 2: sdc               0.00 153504.00    0.00 1200.00     0.00 153600.00   128.00   138.35  115.76   0.83 100.10
> >
> > Fernando, could you try increasing these deadline parameters by 10
> > times?
> >
> >        echo 160   > /sys/block/sdc/queue/iosched/fifo_batch
> >        echo 50000 > /sys/block/sdc/queue/iosched/write_expire
> 
> No changes. The iostat log is attached with the 20090901_1026 time tag
> in its name.
> 
> > And try cfq iosched if that still fails? The iostat outputs would be
> > enough during the tests.
> 
> I had already tried it before posting here, but not with iostat
> logging. The log is attached as 20090901_1037.
> 
> Please tell me if you need anything else.

OK, these traces show how performance suddenly drops:

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sdc               0.00 138176.00    0.00 1098.00     0.00 140544.00   128.00   141.85  129.43   0.91 100.00
sdc               0.00 142240.00    0.00 1105.00     0.00 141440.00   128.00   141.78  128.57   0.90 100.00
sdc               0.00 140335.00    0.00 1105.00     0.00 141440.00   128.00   139.97  127.12   0.90 100.00
sdc               0.00 134858.42    0.00 1085.15     0.00 138899.01   128.00   138.91  127.34   0.91  99.01
sdc               0.00 142050.00    0.00 1099.00     0.00 140672.00   128.00   141.93  129.04   0.91 100.00
sdc               0.00 138176.00    0.00 1099.00     0.00 140672.00   128.00   141.92  129.35   0.91 100.00
sdc               0.00 109728.00    0.00  851.00     0.00 108928.00   128.00   143.74  168.61   1.18 100.00
sdc               0.00 138176.00    0.00 1101.00     0.00 140928.00   128.00   141.92  129.05   0.91 100.00
sdc               0.00 138176.00    0.00 1091.00     0.00 139776.00   128.12   141.87  129.93   0.92 100.00
sdc               0.00 142240.00    0.00 1105.00     0.00 141312.00   127.88   141.90  128.30   0.90 100.00
sdc               0.00 138176.00    0.00 1106.00     0.00 141568.00   128.00   141.78  128.49   0.90 100.00
sdc               0.00 142240.00    0.00 1100.00     0.00 140800.00   128.00   141.88  129.01   0.91 100.00
sdc               0.00 136807.92    0.00 1094.06     0.00 140039.60   128.00   140.49  128.64   0.90  99.01
sdc               0.00 129935.00    0.00 1073.00     0.00 137224.00   127.89   122.86  118.75   0.93 100.00
sdc               0.00 79368.00    0.00  581.00     0.00 74368.00   128.00   114.97  175.83   1.72 100.00
sdc               0.00 73152.00    0.00  575.00     0.00 73600.00   128.00   142.58  246.75   1.74 100.00
sdc               0.00 73152.00    0.00  575.00     0.00 73600.00   128.00   142.60  248.41   1.74 100.00
sdc               0.00 73937.00    0.00  580.00     0.00 74240.00   128.00   142.47  246.14   1.72 100.00
sdc               0.00 76431.00    0.00  578.00     0.00 73984.00   128.00   142.55  246.02   1.73 100.00
sdc               0.00 48768.00    0.00  408.00     0.00 52224.00   128.00   139.84  246.35   2.45 100.00
sdc               0.00 48285.15    0.00  377.23     0.00 48285.15   128.00   138.09  468.92   2.62  99.01
sdc               0.00 65024.00    0.00  515.00     0.00 65920.00   128.00   141.67  251.21   1.94 100.00
sdc               0.00 36456.00    0.00  264.00     0.00 33792.00   128.00   137.37  567.71   3.79 100.00
sdc               0.00 63627.00    0.00  572.00     0.00 73216.00   128.00   138.69  249.88   1.75 100.00
sdc               0.00 33267.00    0.00  216.00     0.00 27648.00   128.00   134.71  259.25   4.63 100.00
sdc               0.00 76334.00    0.00  578.00     0.00 74112.00   128.22   142.35  375.04   1.73 100.00
sdc               0.00 52952.00    0.00  418.00     0.00 53376.00   127.69   145.37  249.30   2.39 100.00
sdc               0.00 44584.00    0.00  356.00     0.00 45568.00   128.00   146.73  524.89   2.81 100.10
sdc               0.00 52952.00    0.00  412.00     0.00 52736.00   128.00   145.21  251.41   2.42  99.90
sdc               0.00 44142.57    0.00  352.48     0.00 45116.83   128.00   145.05  531.77   2.81  99.01
sdc               0.00 52952.00    0.00  412.00     0.00 52736.00   128.00   145.43  248.08   2.43 100.00
sdc               0.00 40640.00    0.00  336.00     0.00 43008.00   128.00   146.81  564.42   2.98 100.00
sdc               0.00 56896.00    0.00  432.00     0.00 55296.00   128.00   144.92  251.52   2.31 100.00
sdc               0.00 36636.00    0.00  314.00     0.00 40192.00   128.00   147.12  583.40   3.18 100.00
sdc               0.00 60900.00    0.00  454.00     0.00 58112.00   128.00   144.48  257.85   2.20 100.00
sdc               0.00 32512.00    0.00  277.00     0.00 35456.00   128.00   147.73  631.81   3.61 100.00

Is it tightly related to two interleaved write streams (the performance drop
starts and stops with the second write stream)?  If so, it sounds like an SSD
hardware/firmware problem.

Thanks,
Fengguang


* Re: I/O and pdflush
       [not found]                             ` <6afc6d4a0909031346qda0b17coe4c60250fcac827f@mail.gmail.com>
@ 2009-09-04  2:21                               ` Wu Fengguang
  2009-09-04  2:34                                 ` Wu Fengguang
  0 siblings, 1 reply; 16+ messages in thread
From: Wu Fengguang @ 2009-09-04  2:21 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: LKML

On Fri, Sep 04, 2009 at 04:46:14AM +0800, Fernando Silveira wrote:
> On Wed, Sep 2, 2009 at 09:50, Wu Fengguang<fengguang.wu@intel.com> wrote:
> > On Wed, Sep 02, 2009 at 07:29:37PM +0800, Fernando Silveira wrote:
> >> On Wed, Sep 2, 2009 at 00:05, Wu Fengguang<fengguang.wu@intel.com> wrote:
> >> > Is it tightly related to two interleaved write streams (the performance drop
> >> > starts and stops with the second write stream)?  If so, it sounds like an SSD
> >> > hardware/firmware problem.
> >>
> >> By write stream, do you mean more than one file descriptor writing to
> >> sdc? I don't usually do that, except for some tests (e.g. the "20090831_0908
> >> iostat_x_1_5-80mbps_3" test). In that last test I did not do that.
> >>
> >> The performance drops even when the whole system is idle, with no
> >> processes other than the essential ones and my program, which does nothing
> >> but open("/dev/sdc", O_WRONLY) and loop on write(fd, buf, 4194304).
> >
> > Yes, I mean concurrently writing to two large files.
> 
> Sorry, but that cannot be happening in my tests. As I said before, the
> only processes running are "init", "udevd", "sshd", "getty", "bash"
> and "ddv". In the "dmesg" logs you can see that the only processes
> that are writing to the disk are "ddv" and "pdflush". I just cannot
> see another stream being written.
> 
> Please tell me if I'm missing something.

OK, so performance can go bad even for one single dd command?

> > Note that the
> > low level disk write start/stop time is normally not equal to
> > application(eg. dd) write start/stop time because of delayed write.
> >
> > Passing "oflag=direct" to dd can make the two times aligned.
> 
> I was aware of that; however, I tried it and now the problem seems
> much more difficult to reproduce. I've been trying for a couple of
> hours and so far all I could see was a small performance drop from
> 80MB/s (now almost steady) to 60MB/s for some seconds (approximately
> 16).
> 
> Could this possibly mean something else to you?

So performance is good on two concurrent O_DIRECT dd commands?
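
(To be concrete, by two concurrent O_DIRECT dd commands I mean something
roughly like this sketch; the seek offset is only an example:)

        dd if=/dev/zero of=/dev/sdc bs=4M oflag=direct &
        dd if=/dev/zero of=/dev/sdc bs=4M seek=25600 oflag=direct &  # second stream ~100GiB in
        wait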

> > These commands can show the block IO internals and help confirm
> > whether it's the kernel or hardware's fault:
> >
> >        blktrace -w1 /dev/sdc # do this while throughput is 25MB/s
> >        blkparse sdc
> 
> I did that but I couldn't identify whose fault it is. I don't understand
> much about I/O internals (still trying to), and I wonder if you could
> check it for me, please.
> 
> The new trace logs are in this tarball:
> 
> http://rootshell.be/~swrh/ssd-tests/log-20090903_1323-MK020903042C0001B-prosar_fcc2-bs_4M-0-blktrace_25mbps.tar.gz

The kernel seems to do its job fine. I do see one obvious pattern (*).
The driver seems to complete IO requests _in batches_, which in turn
makes the kernel issue IO requests in batches; that might choke the
hardware/firmware in some subtle way.

With direct dd(s), the IO queue is limited in size, so this kind of
batching effect will not be obvious, which could explain why it
performs better.

Thanks,
Fengguang
---

(*) `blkparse sdc` shows repeated patterns of

[thousands of A-Q-M sequences by ddv or pdflush]
  8,32   0    11992     0.093682688  2417  A   W 210184323 + 1 <- (8,34) 194119323
  8,32   0    11993     0.093682832  2417  Q   W 210184323 + 1 [ddv]
  8,32   0    11994     0.093683066  2417  M   W 210184323 + 1 [ddv]
  8,32   0    11995     0.093683552  2417  A   W 210184324 + 1 <- (8,34) 194119324
  8,32   0    11996     0.093683702  2417  Q   W 210184324 + 1 [ddv]
  8,32   0    11997     0.093683930  2417  M   W 210184324 + 1 [ddv]
  8,32   0    11998     0.093684422  2417  A   W 210184325 + 1 <- (8,34) 194119325
  8,32   0    11999     0.093684566  2417  Q   W 210184325 + 1 [ddv]
  8,32   0    12000     0.093684799  2417  M   W 210184325 + 1 [ddv]
  8,32   0    12001     0.093685285  2417  A   W 210184326 + 1 <- (8,34) 194119326
  8,32   0    12002     0.093685429  2417  Q   W 210184326 + 1 [ddv]
  8,32   0    12003     0.093685663  2417  M   W 210184326 + 1 [ddv]
  8,32   0    12004     0.093686155  2417  A   W 210184327 + 1 <- (8,34) 194119327
  8,32   0    12005     0.093686299  2417  Q   W 210184327 + 1 [ddv]
  8,32   0    12006     0.093686533  2417  M   W 210184327 + 1 [ddv]
[near 100 C-D sequences]
  8,32   1    12378     0.094161760     0  C   W 210164352 + 128 [0]
  8,32   1    12379     0.094318451     0  D   W 210164608 + 128 [swapper]
  8,32   0    12007     0.095964571  2417  C   W 210164480 + 128 [0]
  8,32   0    12008     0.096097816  2417  D   W 210164736 + 128 [ddv]
  8,32   0    12009     0.097421849   211  C   W 210164608 + 128 [0]
  8,32   0    12010     0.097566626   211  D   W 210164864 + 128 [kswapd0]
  8,32   1    12380     0.099029562     0  C   W 210164736 + 128 [0]
  8,32   1    12381     0.099199427     0  D   W 210164992 + 128 [swapper]
  8,32   0    12011     0.101573454  2417  C   W 210164864 + 128 [0]
  8,32   0    12012     0.101722292  2417  D   W 210165120 + 128 [ddv]
  8,32   1    12382     0.103317406     0  C   W 210164992 + 128 [0]
  8,32   1    12383     0.103491195     0  D   W 210165248 + 128 [swapper]
  8,32   0    12013     0.104819283  2417  C   W 210165120 + 128 [0]
  8,32   0    12014     0.104966411  2417  D   W 210165376 + 128 [ddv]
  8,32   0    12015     0.106378658  2417  C   W 210165248 + 128 [0]
  8,32   0    12016     0.106545098  2417  D   W 210165504 + 128 [ddv]
  8,32   1    12384     0.108959520     0  C   W 210165376 + 128 [0]
  8,32   1    12385     0.109088854     0  D   W 210165632 + 128 [swapper]
  8,32   0    12017     0.110761352     0  C   W 210165504 + 128 [0]
  8,32   0    12018     0.110881820     0  D   W 210165760 + 128 [swapper]
  8,32   1    12386     0.112203050     0  C   W 210165632 + 128 [0]
  8,32   1    12387     0.112310847     0  D   W 210165888 + 128 [swapper]
  8,32   0    12019     0.113762264     0  C   W 210165760 + 128 [0]
  8,32   0    12020     0.113868915     0  D   W 210166016 + 128 [swapper]
  8,32   0    12021     0.116295958     0  C   W 210165888 + 128 [0]
  8,32   0    12022     0.116405740     0  D   W 210166144 + 128 [swapper]
  8,32   1    12388     0.118039279     0  C   W 210166016 + 128 [0]
  8,32   1    12389     0.118146452     0  D   W 210166272 + 128 [swapper]
  8,32   0    12023     0.119468271     0  C   W 210166144 + 128 [0]
  8,32   0    12024     0.119569714     0  D   W 210166400 + 128 [swapper]
  8,32   1    12390     0.120972296     0  C   W 210166272 + 128 [0]
  8,32   1    12391     0.121081701     0  D   W 210166528 + 128 [swapper]
  8,32   1    12392     0.123565552     0  C   W 210166400 + 128 [0]
  8,32   1    12393     0.123674146     0  D   W 210166656 + 128 [swapper]
  8,32   0    12025     0.125309227     0  C   W 210166528 + 128 [0]
  8,32   0    12026     0.125417174     0  D   W 210166784 + 128 [swapper]
  8,32   1    12394     0.126739754     0  C   W 210166656 + 128 [0]
  8,32   1    12395     0.126845037     0  D   W 210166912 + 128 [swapper]
  8,32   0    12027     0.128297684     0  C   W 210166784 + 128 [0]
  8,32   0    12028     0.128407466     0  D   W 210167040 + 128 [swapper]
  8,32   1    12396     0.130840911     0  C   W 210166912 + 128 [0]
  8,32   1    12397     0.130947693     0  D   W 210167168 + 128 [swapper]
  8,32   1    12398     0.132628003     0  C   W 210167040 + 128 [0]
  8,32   1    12399     0.132732236     0  D   W 210167296 + 128 [swapper]
  8,32   0    12029     0.134054889     0  C   W 210167168 + 128 [0]
  8,32   0    12030     0.134160082     0  D   W 210167424 + 128 [swapper]
  8,32   1    12400     0.135563066     0  C   W 210167296 + 128 [0]
  8,32   1    12401     0.135664905     0  D   W 210167552 + 128 [swapper]
  8,32   0    12031     0.138157000     0  C   W 210167424 + 128 [0]
  8,32   0    12032     0.138263110     0  D   W 210167680 + 128 [swapper]
  8,32   1    12402     0.139842121     0  C   W 210167552 + 128 [0]
  8,32   1    12403     0.139945646     0  D   W 210167808 + 128 [swapper]
  8,32   1    12404     0.141269145   210  C   W 210167680 + 128 [0]
  8,32   1    12405     0.141377805   210  D   W 210167936 + 128 [pdflush]
  8,32   0    12033     0.142830026     0  C   W 210167808 + 128 [0]
  8,32   0    12034     0.142936449     0  D   W 210168064 + 128 [swapper]
  8,32   1    12406     0.145314280     0  C   W 210167936 + 128 [0]
  8,32   1    12407     0.145418662     0  D   W 210168192 + 128 [swapper]
  8,32   0    12035     0.147057325     0  C   W 210168064 + 128 [0]
  8,32   0    12036     0.147159590     0  D   W 210168320 + 128 [swapper]
  8,32   1    12408     0.148482753     0  C   W 210168192 + 128 [0]
  8,32   1    12409     0.148587813     0  D   W 210168448 + 128 [swapper]
  8,32   1    12410     0.149991158     0  C   W 210168320 + 128 [0]
  8,32   1    12411     0.150096956     0  D   W 210168576 + 128 [swapper]
  8,32   0    12037     0.152584743     0  C   W 210168448 + 128 [0]
  8,32   0    12038     0.152690919     0  D   W 210168704 + 128 [swapper]
  8,32   1    12412     0.154328700     0  C   W 210168576 + 128 [0]
  8,32   1    12413     0.154431589     0  D   W 210168832 + 128 [swapper]
  8,32   0    12039     0.155754320     0  C   W 210168704 + 128 [0]
  8,32   0    12040     0.155852734     0  D   W 210168960 + 128 [swapper]
  8,32   1    12414     0.157305657     0  C   W 210168832 + 128 [0]
  8,32   1    12415     0.157407880     0  D   W 210169088 + 128 [swapper]
  8,32   1    12416     0.159789046     0  C   W 210168960 + 128 [0]
  8,32   1    12417     0.159887321     0  D   W 210169216 + 128 [swapper]
  8,32   0    12041     0.161590753     0  C   W 210169088 + 128 [0]
  8,32   0    12042     0.161690654     0  D   W 210169344 + 128 [swapper]
  8,32   1    12418     0.163013019     0  C   W 210169216 + 128 [0]
  8,32   1    12419     0.163115584     0  D   W 210169472 + 128 [swapper]
  8,32   0    12043     0.164519096     0  C   W 210169344 + 128 [0]
  8,32   0    12044     0.164622195     0  D   W 210169600 + 128 [swapper]
  8,32   1    12420     0.167173282     0  C   W 210169472 + 128 [0]
  8,32   1    12421     0.167276254     0  D   W 210169728 + 128 [swapper]
  8,32   1    12422     0.168925764     0  C   W 210169600 + 128 [0]
  8,32   1    12423     0.169027981     0  D   W 210169856 + 128 [swapper]
  8,32   0    12045     0.170351065     0  C   W 210169728 + 128 [0]
  8,32   0    12046     0.170455910     0  D   W 210169984 + 128 [swapper]
  8,32   1    12424     0.171907579     0  C   W 210169856 + 128 [0]
  8,32   1    12425     0.172013444     0  D   W 210170112 + 128 [swapper]
  8,32   0    12047     0.174390555     0  C   W 210169984 + 128 [0]
  8,32   0    12048     0.174492477     0  D   W 210170240 + 128 [swapper]
  8,32   1    12426     0.176133642     0  C   W 210170112 + 128 [0]
  8,32   1    12427     0.176237095     0  D   W 210170368 + 128 [swapper]
  8,32   1    12428     0.177558824     0  C   W 210170240 + 128 [0]
  8,32   1    12429     0.177664029     0  D   W 210170496 + 128 [swapper]
  8,32   0    12049     0.179067469     0  C   W 210170368 + 128 [0]
  8,32   0    12050     0.179169668     0  D   W 210170624 + 128 [swapper]
  8,32   1    12430     0.181612304     0  C   W 210170496 + 128 [0]
  8,32   1    12431     0.181714443     0  D   W 210170752 + 128 [swapper]
  8,32   0    12051     0.183355913     0  C   W 210170624 + 128 [0]
  8,32   0    12052     0.183459354     0  D   W 210170880 + 128 [swapper]
  8,32   1    12432     0.184783537     0  C   W 210170752 + 128 [0]
  8,32   1    12433     0.184880864     0  D   W 210171008 + 128 [swapper]
  8,32   0    12053     0.186333961     0  C   W 210170880 + 128 [0]
  8,32   0    12054     0.186428577     0  D   W 210171136 + 128 [swapper]
  8,32   0    12055     0.188818497     0  C   W 210171008 + 128 [0]
  8,32   0    12056     0.188920809     0  D   W 210171264 + 128 [swapper]
  8,32   1    12434     0.190562105     0  C   W 210171136 + 128 [0]
  8,32   1    12435     0.190666416     0  D   W 210171392 + 128 [swapper]
  8,32   0    12057     0.191988452     0  C   W 210171264 + 128 [0]
  8,32   0    12058     0.192089505     0  D   W 210171520 + 128 [swapper]
  8,32   1    12436     0.193493089     0  C   W 210171392 + 128 [0]
  8,32   1    12437     0.193595240     0  D   W 210171648 + 128 [swapper]
  8,32   1    12438     0.196087478     0  C   W 210171520 + 128 [0]
  8,32   1    12439     0.196191783     0  D   W 210171776 + 128 [swapper]
  8,32   0    12059     0.197831496     0  C   W 210171648 + 128 [0]
  8,32   0    12060     0.197933803     0  D   W 210171904 + 128 [swapper]
  8,32   1    12440     0.199257319     0  C   W 210171776 + 128 [0]
  8,32   1    12441     0.199360568     0  D   W 210172032 + 128 [swapper]
  8,32   0    12061     0.200814049     0  C   W 210171904 + 128 [0]
  8,32   0    12062     0.200913626     0  D   W 210172160 + 128 [swapper]
  8,32   1    12442     0.203309419     0  C   W 210172032 + 128 [0]
  8,32   1    12443     0.203412830     0  D   W 210172288 + 128 [swapper]
  8,32   0    12063     0.205060570     0  C   W 210172160 + 128 [0]
  8,32   0    12064     0.205162907     0  D   W 210172416 + 128 [swapper]
  8,32   0    12065     0.206486099     0  C   W 210172288 + 128 [0]
  8,32   0    12066     0.206587944     0  D   W 210172544 + 128 [swapper]
  8,32   1    12444     0.207990905     0  C   W 210172416 + 128 [0]
  8,32   1    12445     0.208091646     0  D   W 210172672 + 128 [swapper]
  8,32   0    12067     0.210583968     0  C   W 210172544 + 128 [0]
  8,32   0    12068     0.210680377     0  D   W 210172800 + 128 [swapper]
  8,32   1    12446     0.212327398     0  C   W 210172672 + 128 [0]
  8,32   1    12447     0.212428457     0  D   W 210172928 + 128 [swapper]
  8,32   0    12069     0.213751590     0  C   W 210172800 + 128 [0]
  8,32   0    12070     0.213853765     0  D   W 210173056 + 128 [swapper]
  8,32   0    12071     0.215305974     0  C   W 210172928 + 128 [0]
  8,32   0    12072     0.215410602     0  D   W 210173184 + 128 [swapper]
  8,32   1    12448     0.217849597     0  C   W 210173056 + 128 [0]
  8,32   1    12449     0.217949042     0  D   W 210173312 + 128 [swapper]
  8,32   0    12073     0.219601719     0  C   W 210173184 + 128 [0]
  8,32   0    12074     0.219703342     0  D   W 210173440 + 128 [swapper]
  8,32   1    12450     0.221025743     0  C   W 210173312 + 128 [0]
  8,32   1    12451     0.221127756     0  D   W 210173568 + 128 [swapper]
  8,32   0    12075     0.222580660     0  C   W 210173440 + 128 [0]
  8,32   0    12076     0.222676374     0  D   W 210173696 + 128 [swapper]
  8,32   0    12077     0.225014766     0  C   W 210173568 + 128 [0]
  8,32   0    12078     0.225111955     0  D   W 210173824 + 128 [swapper]
  8,32   1    12452     0.226711310     0  C   W 210173696 + 128 [0]
  8,32   1    12453     0.226812111     0  D   W 210173952 + 128 [swapper]
  8,32   0    12079     0.229019147     0  C   W 210173824 + 128 [0]
  8,32   0    12080     0.229119366     0  D   W 210174080 + 128 [swapper]
  8,32   1    12454     0.230581959     0  C   W 210173952 + 128 [0]
  8,32   1    12455     0.230682370     0  D   W 210174208 + 128 [swapper]
  8,32   0    12081     0.232003758     0  C   W 210174080 + 128 [0]
  8,32   0    12082     0.232102141     0  D   W 210174336 + 128 [swapper]
  8,32   0    12083     0.233503956     0  C   W 210174208 + 128 [0]
  8,32   0    12084     0.233600131     0  D   W 210174464 + 128 [swapper]
  8,32   1    12456     0.234921110     0  C   W 210174336 + 128 [0]
  8,32   1    12457     0.235018282     0  D   W 210174592 + 128 [swapper]
  8,32   0    12085     0.236420624     0  C   W 210174464 + 128 [0]
  8,32   0    12086     0.236519661     0  D   W 210174720 + 128 [swapper]
  8,32   1    12458     0.238714991     0  C   W 210174592 + 128 [0]
  8,32   1    12459     0.238812841     0  D   W 210174848 + 128 [swapper]
  8,32   0    12087     0.240419071     0  C   W 210174720 + 128 [0]
  8,32   0    12088     0.240514299     0  D   W 210174976 + 128 [swapper]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: I/O and pdflush
  2009-09-04  2:21                               ` Wu Fengguang
@ 2009-09-04  2:34                                 ` Wu Fengguang
  0 siblings, 0 replies; 16+ messages in thread
From: Wu Fengguang @ 2009-09-04  2:34 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: LKML

On Fri, Sep 04, 2009 at 10:21:23AM +0800, Wu Fengguang wrote:
> On Fri, Sep 04, 2009 at 04:46:14AM +0800, Fernando Silveira wrote:
> > On Wed, Sep 2, 2009 at 09:50, Wu Fengguang<fengguang.wu@intel.com> wrote:
> > > On Wed, Sep 02, 2009 at 07:29:37PM +0800, Fernando Silveira wrote:
> > >> On Wed, Sep 2, 2009 at 00:05, Wu Fengguang<fengguang.wu@intel.com> wrote:
> > >> > Is it tightly related to two interleaved write streams (the performance drop
> > >> > starts and stops with the second write stream)?  If so, it sounds like a SSD
> > >> > hardware/firmware problem.
> > >>
> > >> By write stream, do you mean more than one file descriptor writing to
> > >> sdc? I don't usually do that, except in some tests (e.g. the "20090831_0908
> > >> iostat_x_1_5-80mbps_3" test). In that last test I did not do that.
> > >>
> > >> The performance drops even when the whole system is idle, with no
> > >> process running other than the essential ones and my program, which does
> > >> nothing but open("/dev/sdc", O_WRONLY) and loop in write(fd, buf, 4194304).
> > >
> > > Yes, I mean concurrently writing to two large files.
> > 
> > Sorry, but that cannot be happening in my tests. As I said before, the
> > only processes running are "init", "udevd", "sshd", "getty", "bash"
> > and "ddv". In the "dmesg" logs you can see that the only processes
> > that are writing to the disk are "ddv" and "pdflush". I just cannot
> > see another stream being written.
> > 
> > Please tell me if I'm missing something.
> 
> OK, so performance can go bad even for one single dd command?
> 
> > > Note that the
> > > low-level disk write start/stop time is normally not equal to the
> > > application (e.g. dd) write start/stop time, because of delayed writes.
> > >
> > > Passing "oflag=direct" to dd can make the two times aligned.
> > 
> > I was aware of that; however, I tried it and now the problem seems to
> > be much more difficult to reproduce. I've been trying for a couple of
> > hours and so far all I could see was a small performance drop from
> > 80MB/s (now almost steady) to 60MB/s for some seconds (approximately 16).
> > 
> > Could this possibly mean something else to you?
> 
> So performance is good on two concurrent O_DIRECT dd commands?
> 
> > > These commands can show the block IO internals and help confirm
> > > whether it's the kernel or hardware's fault:
> > >
> > >        blktrace -w1 /dev/sdc # do this while throughput is 25MB/s
> > >        blkparse sdc
> > 
> > I did that, but I couldn't identify whose fault it is. I don't understand
> > much about I/O internals (I'm still trying to) and I wonder if you could
> > check it for me, please.
> > 
> > The new trace logs are in this tarball:
> > 
> > http://rootshell.be/~swrh/ssd-tests/log-20090903_1323-MK020903042C0001B-prosar_fcc2-bs_4M-0-blktrace_25mbps.tar.gz
> 
> The kernel seems to do its job fine. I do see one obvious pattern(*).
> The driver seems to complete IO requests _in batches_, which in turn
> makes the kernel issue IO requests in batches. That might choke the
> hardware/firmware in some subtle way.
> 
> With direct dd(s), the I/O queue is limited in size, so this kind of
> batching effect will not be obvious, which could explain why direct
> writes perform better.

Fernando, to confirm this, you can try running dd with a lowered queue depth:

        echo 32 > /sys/block/sdc/queue/nr_requests
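
For what it's worth, a rough sketch of the whole experiment as one script
(sdc and the count are only placeholders taken from this thread, and the dd
overwrites the device, so adjust before running):

        #!/bin/sh
        # Sketch: compare throughput with a smaller block-layer queue depth.
        # WARNING: writes directly to /dev/sdc and destroys its contents.
        DEV=sdc

        # Save the current queue depth so it can be restored afterwards
        # (128 is a common default, but read it rather than assuming it).
        OLD=$(cat /sys/block/$DEV/queue/nr_requests)

        # Limit how many requests the block layer keeps queued for the device.
        echo 32 > /sys/block/$DEV/queue/nr_requests

        # Reproduce the workload; oflag=direct bypasses the page cache so
        # the rate dd reports tracks what the device itself sustains.
        # count=2500 (~10GB) is only an illustrative figure.
        dd if=/dev/zero of=/dev/$DEV bs=4M count=2500 oflag=direct

        # Restore the original queue depth.
        echo "$OLD" > /sys/block/$DEV/queue/nr_requests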

Thanks,
Fengguang

> ---
> 
> (*) `blkparse sdc` shows repeated patterns of
> 
> [thousands of A-Q-M sequences by ddv or pdflush]
>   8,32   0    11992     0.093682688  2417  A   W 210184323 + 1 <- (8,34) 194119323
>   8,32   0    11993     0.093682832  2417  Q   W 210184323 + 1 [ddv]
>   8,32   0    11994     0.093683066  2417  M   W 210184323 + 1 [ddv]
>   8,32   0    11995     0.093683552  2417  A   W 210184324 + 1 <- (8,34) 194119324
>   8,32   0    11996     0.093683702  2417  Q   W 210184324 + 1 [ddv]
>   8,32   0    11997     0.093683930  2417  M   W 210184324 + 1 [ddv]
>   8,32   0    11998     0.093684422  2417  A   W 210184325 + 1 <- (8,34) 194119325
>   8,32   0    11999     0.093684566  2417  Q   W 210184325 + 1 [ddv]
>   8,32   0    12000     0.093684799  2417  M   W 210184325 + 1 [ddv]
>   8,32   0    12001     0.093685285  2417  A   W 210184326 + 1 <- (8,34) 194119326
>   8,32   0    12002     0.093685429  2417  Q   W 210184326 + 1 [ddv]
>   8,32   0    12003     0.093685663  2417  M   W 210184326 + 1 [ddv]
>   8,32   0    12004     0.093686155  2417  A   W 210184327 + 1 <- (8,34) 194119327
>   8,32   0    12005     0.093686299  2417  Q   W 210184327 + 1 [ddv]
>   8,32   0    12006     0.093686533  2417  M   W 210184327 + 1 [ddv]
> [near 100 C-D sequences]
>   8,32   1    12378     0.094161760     0  C   W 210164352 + 128 [0]
>   8,32   1    12379     0.094318451     0  D   W 210164608 + 128 [swapper]
>   8,32   0    12007     0.095964571  2417  C   W 210164480 + 128 [0]
>   8,32   0    12008     0.096097816  2417  D   W 210164736 + 128 [ddv]
>   8,32   0    12009     0.097421849   211  C   W 210164608 + 128 [0]
>   8,32   0    12010     0.097566626   211  D   W 210164864 + 128 [kswapd0]
>   8,32   1    12380     0.099029562     0  C   W 210164736 + 128 [0]
>   8,32   1    12381     0.099199427     0  D   W 210164992 + 128 [swapper]
>   8,32   0    12011     0.101573454  2417  C   W 210164864 + 128 [0]
>   8,32   0    12012     0.101722292  2417  D   W 210165120 + 128 [ddv]
>   8,32   1    12382     0.103317406     0  C   W 210164992 + 128 [0]
>   8,32   1    12383     0.103491195     0  D   W 210165248 + 128 [swapper]
> [snip - the remaining C-D events are identical to the trace listed in
> full in the previous message]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: I/O and pdflush
  2009-08-31 21:57 ` Daniel J Blueman
@ 2009-09-01 14:33   ` Fernando Silveira
  0 siblings, 0 replies; 16+ messages in thread
From: Fernando Silveira @ 2009-09-01 14:33 UTC (permalink / raw)
  To: Daniel J Blueman; +Cc: Linux Kernel

On Mon, Aug 31, 2009 at 18:57, Daniel J Blueman<daniel.blueman@gmail.com> wrote:
> On Jul 11, 6:30 pm, Fernando Silveira <fsilve...@gmail.com> wrote:
>> The problem is that after some time of data writing at 70MB/s, it
>> eventually falls down to about 25MB/s and does not get up again until
>> a loooong time has passed (from 1 to 30 minutes). This happens much
>> more often when "vm.dirty_*" settings are default (30 secs to expire,
>> 5 secs for writeback, 10% and 40% for background and normal ratio),
>> and when I set them to 1 second or even 0, the problem happens much
>> less often and the sticking period of 25MB/s is much lower.
> [snip]
>
> Generally, this sounds symptomatic of the (now well-documented)
> internal block management of the SSD delaying writes significantly
> over long periods of use - the output of e.g. 'vmstat 3' will show all
> the time spent in I/O wait, of course.
>
> In your case, the OCZ Core v2 SSD uses the infamous JMicron JMF602
> controller, which chokes on outstanding write requests, destroying both
> latency and throughput. OCZ won't be releasing newer firmware for the
> Core v2, so one last-ditch option is to install the newer stock (non-OCZ
> validated, thus invalidating the warranty) JMicron firmware, which
> alleviates this somewhat. I've found this successful, though the
> firmware is tricky to find!

Thanks, I'll keep that in mind. If no other, safer solution helps, I'll
update the JMicron firmware.

> One other idea is to ensure the filesystem starts on an (at least)
> 128KB boundary, so as to submit complete erase blocks (usually 128KB).

I don't use any kind of filesystem, just a plain sequential data dump. I
can change the block size for each write(2), but I'm already using a
huge one: 4MB. I tried lower and higher values before, but it seems
that the 70~80MB/s rate is only reached with blocks of 1MB or larger.
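
For the record, a sweep like that could be redone with dd along these
lines - only a sketch, it overwrites the start of /dev/sdc and it merely
approximates my program, which has its own write(2) loop:

        # Sketch: sequential write rate at several block sizes, ~1GB per run.
        # oflag=direct bypasses the page cache, so the rate dd reports is
        # what the device itself sustains at that request size.
        for spec in 256K:4096 1M:1024 4M:256 16M:64; do
                bs=${spec%:*}
                count=${spec#*:}
                printf 'block size %s: ' "$bs"
                dd if=/dev/zero of=/dev/sdc bs=$bs count=$count oflag=direct 2>&1 | tail -n 1
        done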

-- 
Fernando Silveira <fsilveira@gmail.com>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: I/O and pdflush
       [not found] <cWOyL-3Ys-15@gated-at.bofh.it>
@ 2009-08-31 21:57 ` Daniel J Blueman
  2009-09-01 14:33   ` Fernando Silveira
  0 siblings, 1 reply; 16+ messages in thread
From: Daniel J Blueman @ 2009-08-31 21:57 UTC (permalink / raw)
  To: Fernando Silveira; +Cc: Linux Kernel

On Jul 11, 6:30 pm, Fernando Silveira <fsilve...@gmail.com> wrote:
> The problem is that after some time of data writing at 70MB/s, it
> eventually falls down to about 25MB/s and does not get up again until
> a loooong time has passed (from 1 to 30 minutes). This happens much
> more often when "vm.dirty_*" settings are default (30 secs to expire,
> 5 secs for writeback, 10% and 40% for background and normal ratio),
> and when I set them to 1 second or even 0, the problem happens much
> less often and the sticking period of 25MB/s is much lower.
[snip]

Generally, this sounds symptomatic of the (now well-documented)
internal block management of the SSD delaying writes significantly
over long periods of use - the output of e.g. 'vmstat 3' will show all
the time spent in I/O wait, of course.

In your case, the OCZ Core v2 SSD uses the infamous JMicron JMF602
controller, which chokes on outstanding write requests, destroying both
latency and throughput. OCZ won't be releasing newer firmware for the
Core v2, so one last-ditch option is to install the newer stock (non-OCZ
validated, thus invalidating the warranty) JMicron firmware, which
alleviates this somewhat. I've found this successful, though the
firmware is tricky to find!

One other idea is to ensure the filesystem starts on an (at least)
128KB boundary, so as to submit complete erase blocks (usually 128KB).
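
A rough way to check that from sysfs - only a sketch, and it assumes
512-byte logical sectors and a 128KB erase block, which is a guess rather
than a documented figure for this drive:

        # Sketch: report whether each partition of sdc starts on a 128KB
        # boundary; with 512-byte sectors, 128KB corresponds to 256 sectors.
        for part in /sys/block/sdc/sdc[0-9]*; do
                start=$(cat "$part/start")   # partition start, in sectors
                if [ $((start % 256)) -eq 0 ]; then
                        echo "${part##*/}: start sector $start (128KB aligned)"
                else
                        echo "${part##*/}: start sector $start (NOT 128KB aligned)"
                fi
        done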

Thanks,
  Daniel
--
Daniel J Blueman

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2009-09-04  2:34 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-07-11 17:27 I/O and pdflush Fernando Silveira
2009-07-12  8:04 ` Wu Fengguang
2009-08-28 21:48   ` Fernando Silveira
2009-08-29 10:12     ` Wu Fengguang
2009-08-29 10:21       ` Wu Fengguang
2009-08-31 13:24         ` Fernando Silveira
2009-08-31 14:00           ` Wu Fengguang
2009-08-31 14:01             ` Wu Fengguang
2009-08-31 14:07               ` Wu Fengguang
2009-08-31 14:33                 ` Fernando Silveira
2009-09-01  8:14                   ` Wu Fengguang
     [not found]                     ` <6afc6d4a0909010710l2cf77fbbmb1ab192ed12a7efc@mail.gmail.com>
2009-09-02  3:05                       ` Wu Fengguang
     [not found]                         ` <6afc6d4a0909020429l2bfecee9xd00527fcaa323751@mail.gmail.com>
     [not found]                           ` <20090902125057.GA7982@localhost>
     [not found]                             ` <6afc6d4a0909031346qda0b17coe4c60250fcac827f@mail.gmail.com>
2009-09-04  2:21                               ` Wu Fengguang
2009-09-04  2:34                                 ` Wu Fengguang
     [not found] <cWOyL-3Ys-15@gated-at.bofh.it>
2009-08-31 21:57 ` Daniel J Blueman
2009-09-01 14:33   ` Fernando Silveira
