linux-kernel.vger.kernel.org archive mirror
* Debugging system freezes on filesystem writes
@ 2012-10-28 22:39 Marcus Sundman
  2012-11-01 19:01 ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2012-10-28 22:39 UTC (permalink / raw)
  To: linux-kernel

Hi,

I have a big problem with the system freezing and would appreciate any 
help on debugging this and pinpointing where exactly the problem is, so 
it could be fixed.

So, whenever I write to the disk the system comes to a crawl or freezes 
altogether. This happens even when the writing processes are running on 
nice '19' and ionice 'idle'. (E.g. a 10 second compile could freeze the 
system for several minutes, rendering the computer pretty much unusable 
for anything interesting.)
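For reference, this is roughly how such a lowest-priority writer can be started (a sketch; the file path and size are arbitrary). ionice class 3 is the "idle" class, so with working I/O prioritization the write should only get disk time when nothing else wants it:

```shell
# CPU priority: nice 19 (lowest); I/O priority: class 3 ("idle").
# If prioritization worked as expected, this should not stall the system.
nice -n 19 ionice -c 3 dd if=/dev/zero of=/tmp/lowprio-test bs=1M count=64
rm -f /tmp/lowprio-test
```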

Here you can see a 20 second gap even at superhigh priority:
# nice -n -20 ionice -c1 iostat -t -m -d -x 1   (output: http://pastebin.com/j5qnh2VV)

I'm currently running 3.5.0-17-lowlatency on the ZenBook UX31E, using 
the NOOP I/O scheduler on the SanDisk SSD U100. The chipset seems to be 
Intel QS67. I've had this same problem on 3.2.0 generic and lowlatency 
kernels.

The syslog says nothing relevant before/during/after these freezes.

Any ideas on finding the culprit?


Regards,
Marcus


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: Debugging system freezes on filesystem writes
  2012-10-28 22:39 Debugging system freezes on filesystem writes Marcus Sundman
@ 2012-11-01 19:01 ` Jan Kara
  2012-11-02  2:19   ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Kara @ 2012-11-01 19:01 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: linux-kernel

On Mon 29-10-12 00:39:46, Marcus Sundman wrote:
  Hello,

> I have a big problem with the system freezing and would appreciate
> any help on debugging this and pinpointing where exactly the problem
> is, so it could be fixed.
> 
> So, whenever I write to the disk the system comes to a crawl or
> freezes altogether. This happens even when the writing processes are
> running on nice '19' and ionice 'idle'. (E.g. a 10 second compile
> could freeze the system for several minutes, rendering the computer
> pretty much unusable for anything interesting.)
> 
> Here you can see a 20 second gap even in superhigh priority:
> # nice -n -20 ionice -c1 iostat -t -m -d -x 1 > http://pastebin.com/j5qnh2VV
> 
> I'm currently running 3.5.0-17-lowlatency on the ZenBook UX31E,
> using the NOOP I/O scheduler on the SanDisk SSD U100. The chipset
> seems to be Intel QS67. I've had this same problem on 3.2.0 generic
> and lowlatency kernels.
  These are Ubuntu kernels. Any chance to reproduce the issue with vanilla
kernels - i.e. kernels without any Ubuntu patches? Also when you speak of
system freezing - can you e.g. type to terminal while the system is frozen?
Or is it just that running commands freezes? And how much free memory do
you have while the system is frozen? Finally, can you trigger the freeze by
something simpler than compilation - e.g. does
  dd if=/dev/zero of=/tmp/testfile bs=1M
trigger the freeze as well?

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2012-11-01 19:01 ` Jan Kara
@ 2012-11-02  2:19   ` Marcus Sundman
  2012-11-07 16:17     ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2012-11-02  2:19 UTC (permalink / raw)
  To: linux-kernel; +Cc: jack

On 01.11.2012 21:01, Jan Kara wrote:
> On Mon 29-10-12 00:39:46, Marcus Sundman wrote:
>    Hello,
>
>> I have a big problem with the system freezing and would appreciate
>> any help on debugging this and pinpointing where exactly the problem
>> is, so it could be fixed.
>>
>> So, whenever I write to the disk the system comes to a crawl or
>> freezes altogether. This happens even when the writing processes are
>> running on nice '19' and ionice 'idle'. (E.g. a 10 second compile
>> could freeze the system for several minutes, rendering the computer
>> pretty much unusable for anything interesting.)
>>
>> Here you can see a 20 second gap even in superhigh priority:
>> # nice -n -20 ionice -c1 iostat -t -m -d -x 1 > http://pastebin.com/j5qnh2VV
>>
>> I'm currently running 3.5.0-17-lowlatency on the ZenBook UX31E,
>> using the NOOP I/O scheduler on the SanDisk SSD U100. The chipset
>> seems to be Intel QS67. I've had this same problem on 3.2.0 generic
>> and lowlatency kernels.
>    These are Ubuntu kernels. Any chance to reproduce the issue with vanilla
> kernels - i.e. kernels without any Ubuntu patches?

I'm afraid it's going to take a week to compile a kernel with this 
freezing going on, but I suppose I could get another computer to do the 
compiling. Or should I install some pre-compiled version? If so, which one?

> Also when you speak of
> system freezing - can you e.g. type to terminal while the system is frozen?
> Or is it just that running commands freezes?

Typing usually doesn't work very well. It works for a word or two and 
then stops working for a while, and if I continue to type, then when it 
resumes only the last few characters appear. Typing in the console is a 
bit better than in a terminal in X (not counting the several minutes it 
can take to switch to the console (Ctrl-Alt-F1)).

> And how much free memory do
> you have while the system is frozen?

It varies. Or it depends on how you look at it: usually my RAM is full, 
but mostly with "buffers" (whatever that is in practice).
My swap is close to zero, because I keep swappiness at 1, or else the 
freezing gets totally out of control.
And I've disabled journaling, because journaling also makes it much, 
much worse. (Using ext4, btw.)
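A quick way to see what that memory is actually being used for is /proc/meminfo; during a stall, Dirty and Writeback are the interesting counters, since they show how much data is waiting to be flushed to disk:

```shell
# Snapshot the page-cache related counters from /proc/meminfo;
# Dirty/Writeback indicate data still queued for writeback to disk.
grep -E '^(MemFree|Buffers|Cached|Dirty|Writeback):' /proc/meminfo
```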

> Finally, can you trigger the freeze by
> something simpler than compilation - e.g. does
>    dd if=/dev/zero of=/tmp/testfile bs=1M
> trigger the freeze as well?

Yes, that command sure does trigger the freezes. However, if there's 
nothing else going on then that command doesn't make the system freeze 
totally (at least not immediately), but if I do some other filesystem 
activity (e.g., ls) at the same time then the freezing starts.

Also, and this might be important, according to iotop there is almost no 
disk writing going on during the freeze. (Occasionally there are a few 
MB/s, but mostly it's 0-200 kB/s.) Well, at least when an iotop running 
on nice -20 hasn't frozen completely, which it does during the more 
severe freezes.


Regards,
Marcus



* Re: Debugging system freezes on filesystem writes
  2012-11-02  2:19   ` Marcus Sundman
@ 2012-11-07 16:17     ` Jan Kara
  2012-11-08 23:41       ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Kara @ 2012-11-07 16:17 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: linux-kernel, jack

On Fri 02-11-12 04:19:24, Marcus Sundman wrote:
> On 01.11.2012 21:01, Jan Kara wrote:
> >On Mon 29-10-12 00:39:46, Marcus Sundman wrote:
> >   Hello,
> >
> >>I have a big problem with the system freezing and would appreciate
> >>any help on debugging this and pinpointing where exactly the problem
> >>is, so it could be fixed.
> >>
> >>So, whenever I write to the disk the system comes to a crawl or
> >>freezes altogether. This happens even when the writing processes are
> >>running on nice '19' and ionice 'idle'. (E.g. a 10 second compile
> >>could freeze the system for several minutes, rendering the computer
> >>pretty much unusable for anything interesting.)
> >>
> >>Here you can see a 20 second gap even in superhigh priority:
> >># nice -n -20 ionice -c1 iostat -t -m -d -x 1 > http://pastebin.com/j5qnh2VV
> >>
> >>I'm currently running 3.5.0-17-lowlatency on the ZenBook UX31E,
> >>using the NOOP I/O scheduler on the SanDisk SSD U100. The chipset
> >>seems to be Intel QS67. I've had this same problem on 3.2.0 generic
> >>and lowlatency kernels.
> >   These are Ubuntu kernels. Any chance to reproduce the issue with vanilla
> >kernels - i.e. kernels without any Ubuntu patches?
> 
> I'm afraid it's going to take a week to compile a kernel with this
> freezing going on, but I suppose I could get another computer to do
> the compiling. Or should I install some pre-compiled version? If so,
> which one?
  You can install anything precompiled. It's just that I want to rule out
some Ubuntu-specific patches...

> >Also when you speak of
> >system freezing - can you e.g. type to terminal while the system is frozen?
> >Or is it just that running commands freezes?
> 
> Typing usually doesn't work very well. It works for a word or two
> and then stops working for a while and if I continue to type then
> when it resumes only the last few characters appears. Typing in the
> console is a bit better than in a terminal in X (not counting the
> several minutes it can take to switch to the console (Ctrl-Alt-F1)).
  I see. 

> Also, and this might be important, according to iotop there is
> almost no disk writing going on during the freeze. (Occasionally
> there are a few MB/s, but mostly it's 0-200 kB/s.) Well, at least
> when an iotop running on nice -20 hasn't frozen completely, which it
> does during the more severe freezes.
  OK, it seems as if your machine has some problems with memory
allocations. Can you capture /proc/vmstat before the freeze and after the
freeze and send them for comparison? Maybe it will show us what the
system is doing.

  Also you can try doing:
echo never >/sys/kernel/mm/transparent_hugepage/enabled
  and see whether it changes anything.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2012-11-07 16:17     ` Jan Kara
@ 2012-11-08 23:41       ` Marcus Sundman
  2012-11-09 13:12         ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2012-11-08 23:41 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-kernel

On 07.11.2012 18:17, Jan Kara wrote:
> On Fri 02-11-12 04:19:24, Marcus Sundman wrote:
>> On 01.11.2012 21:01, Jan Kara wrote:
>>> On Mon 29-10-12 00:39:46, Marcus Sundman wrote:
>>>    Hello,
>>>
>>>> I have a big problem with the system freezing and would appreciate
>>>> any help on debugging this and pinpointing where exactly the problem
>>>> is, so it could be fixed.
>>>>
>>>> So, whenever I write to the disk the system comes to a crawl or
>>>> freezes altogether. This happens even when the writing processes are
>>>> running on nice '19' and ionice 'idle'. (E.g. a 10 second compile
>>>> could freeze the system for several minutes, rendering the computer
>>>> pretty much unusable for anything interesting.)
>>>>
>>>> Here you can see a 20 second gap even in superhigh priority:
>>>> # nice -n -20 ionice -c1 iostat -t -m -d -x 1 > http://pastebin.com/j5qnh2VV
>>>>
>>>> I'm currently running 3.5.0-17-lowlatency on the ZenBook UX31E,
>>>> using the NOOP I/O scheduler on the SanDisk SSD U100. The chipset
>>>> seems to be Intel QS67. I've had this same problem on 3.2.0 generic
>>>> and lowlatency kernels.
>>>    These are Ubuntu kernels. Any chance to reproduce the issue with vanilla
>>> kernels - i.e. kernels without any Ubuntu patches?
>> I'm afraid it's going to take a week to compile a kernel with this
>> freezing going on, but I suppose I could get another computer to do
>> the compiling. Or should I install some pre-compiled version? If so,
>> which one?
>    You can install anything precompiled. It's just that I want to rule out
> some Ubuntu specific patches...

OK, I tried it with a vanilla 3.6.6 -- "uname -a" says "Linux hal 
3.6.6-030606-generic #201211050512 SMP Mon Nov 5 10:12:53 UTC 2012 
x86_64 x86_64 x86_64 GNU/Linux"

>>> Also when you speak of
>>> system freezing - can you e.g. type to terminal while the system is frozen?
>>> Or is it just that running commands freezes?
>> Typing usually doesn't work very well. It works for a word or two
>> and then stops working for a while and if I continue to type then
>> when it resumes only the last few characters appears. Typing in the
>> console is a bit better than in a terminal in X (not counting the
>> several minutes it can take to switch to the console (Ctrl-Alt-F1)).
>    I see.
>
>> Also, and this might be important, according to iotop there is
>> almost no disk writing going on during the freeze. (Occasionally
>> there are a few MB/s, but mostly it's 0-200 kB/s.) Well, at least
>> when an iotop running on nice -20 hasn't frozen completely, which it
>> does during the more severe freezes.
>    OK, it seems as if your machine has some problems with memory
> allocations. Can you capture /proc/vmstat before the freeze and after the
> freeze and send them for comparison. Maybe it will show us what is the
> system doing.

t=01:06 http://sundman.iki.fi/vmstat.pre-freeze.txt
t=01:08 http://sundman.iki.fi/vmstat.during-freeze.txt
t=01:12 http://sundman.iki.fi/vmstat.post-freeze.txt

>    Also you can try doing:
> echo never >/sys/kernel/mm/transparent_hugepage/enabled
>    and see whether it changes anything.

It's already set to 'never'. I think I configured this in the very 
beginning when trying to do something about these freezes.

I also have these in sysctl:
vm.swappiness=1
vm.vfs_cache_pressure=50
vm.dirty_ratio = 15
vm.dirty_background_ratio = 8

And /sys/block/sda/device/queue_depth is 1.
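For completeness, the equivalent runtime commands (a sketch that mirrors the values listed above; these need root and do not persist across reboots, unlike /etc/sysctl.conf entries):

```shell
# Apply the same tuning at runtime instead of via /etc/sysctl.conf:
sysctl -w vm.swappiness=1
sysctl -w vm.vfs_cache_pressure=50
sysctl -w vm.dirty_ratio=15
sysctl -w vm.dirty_background_ratio=8
echo 1 > /sys/block/sda/device/queue_depth   # limit queue depth on sda
```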


Regards,
Marcus



* Re: Debugging system freezes on filesystem writes
  2012-11-08 23:41       ` Marcus Sundman
@ 2012-11-09 13:12         ` Marcus Sundman
  2012-11-13 13:51           ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2012-11-09 13:12 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-kernel

On 09.11.2012 01:41, Marcus Sundman wrote:
> On 07.11.2012 18:17, Jan Kara wrote:
>> On Fri 02-11-12 04:19:24, Marcus Sundman wrote:
>>> Also, and this might be important, according to iotop there is
>>> almost no disk writing going on during the freeze. (Occasionally
>>> there are a few MB/s, but mostly it's 0-200 kB/s.) Well, at least
>>> when an iotop running on nice -20 hasn't frozen completely, which it
>>> does during the more severe freezes.
>>    OK, it seems as if your machine has some problems with memory
>> allocations. Can you capture /proc/vmstat before the freeze and after 
>> the
>> freeze and send them for comparison. Maybe it will show us what is the
>> system doing.
>
> t=01:06 http://sundman.iki.fi/vmstat.pre-freeze.txt
> t=01:08 http://sundman.iki.fi/vmstat.during-freeze.txt
> t=01:12 http://sundman.iki.fi/vmstat.post-freeze.txt

Here are some more vmstats:
http://sundman.iki.fi/vmstats.tar.gz

They are from running this:
while true; do cat /proc/vmstat > "vmstat.$(date +%FT%X).txt"; sleep 10; done
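Two of those snapshots can then be diffed counter by counter, e.g. with a small awk one-liner (filenames are illustrative; /proc/vmstat is a simple "name value" list, so the delta of each counter is just the difference between the two files):

```shell
# Print every /proc/vmstat counter that changed between two snapshots,
# together with its delta (second snapshot minus first).
awk 'NR == FNR { pre[$1] = $2; next }
     $1 in pre && $2 != pre[$1] { print $1, $2 - pre[$1] }' \
    vmstat.pre-freeze.txt vmstat.during-freeze.txt
```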

There were lots and lots of freezes for almost 20 mins from 14:37:45 
onwards, pretty much constantly, but at 14:56:50 the freezes suddenly 
stopped and everything went back to how it should be.


Thanks,
Marcus



* Re: Debugging system freezes on filesystem writes
  2012-11-09 13:12         ` Marcus Sundman
@ 2012-11-13 13:51           ` Jan Kara
  2012-11-16  1:11             ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Kara @ 2012-11-13 13:51 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, linux-kernel

On Fri 09-11-12 15:12:43, Marcus Sundman wrote:
> On 09.11.2012 01:41, Marcus Sundman wrote:
> >On 07.11.2012 18:17, Jan Kara wrote:
> >>On Fri 02-11-12 04:19:24, Marcus Sundman wrote:
> >>>Also, and this might be important, according to iotop there is
> >>>almost no disk writing going on during the freeze. (Occasionally
> >>>there are a few MB/s, but mostly it's 0-200 kB/s.) Well, at least
> >>>when an iotop running on nice -20 hasn't frozen completely, which it
> >>>does during the more severe freezes.
> >>   OK, it seems as if your machine has some problems with memory
> >>allocations. Can you capture /proc/vmstat before the freeze and
> >>after the
> >>freeze and send them for comparison. Maybe it will show us what is the
> >>system doing.
> >
> >t=01:06 http://sundman.iki.fi/vmstat.pre-freeze.txt
> >t=01:08 http://sundman.iki.fi/vmstat.during-freeze.txt
> >t=01:12 http://sundman.iki.fi/vmstat.post-freeze.txt
> 
> Here are some more vmstats:
> http://sundman.iki.fi/vmstats.tar.gz
> 
> They are from running this:
> while true; do cat /proc/vmstat > "vmstat.$(date +%FT%X).txt"; sleep
> 10; done
> 
> There were lots and lots of freezes for almost 20 mins from 14:37:45
> onwards, pretty much constantly, but at 14:56:50 the freezes
> suddenly stopped and everything went back to how it should be.
  I was looking into the data but they didn't show anything problematic.
The machine seems to be writing a lot, but there's always some free memory;
even direct reclaim isn't ever entered. Hum, actually you wrote iotop isn't
showing much IO going on but vmstats show there is about 1 GB written
during the freeze. It is not a huge amount given the time span but it
certainly gives a few MB/s of write load.

There's a surprisingly high number of allocations going on, but that may be
due to the IO activity. So let's try something else: Can you switch to
console and when the hang happens press Alt-Sysrq-w (or you can just do
"echo w >/proc/sysrq-trigger" if the machine is live enough to do that).
Then send me the output from dmesg.  Thanks!

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2012-11-13 13:51           ` Jan Kara
@ 2012-11-16  1:11             ` Marcus Sundman
  2012-11-21 23:30               ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2012-11-16  1:11 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-kernel

On 13.11.2012 15:51, Jan Kara wrote:
> On Fri 09-11-12 15:12:43, Marcus Sundman wrote:
>> On 09.11.2012 01:41, Marcus Sundman wrote:
>>> On 07.11.2012 18:17, Jan Kara wrote:
>>>> On Fri 02-11-12 04:19:24, Marcus Sundman wrote:
>>>>> Also, and this might be important, according to iotop there is
>>>>> almost no disk writing going on during the freeze. (Occasionally
>>>>> there are a few MB/s, but mostly it's 0-200 kB/s.) Well, at least
>>>>> when an iotop running on nice -20 hasn't frozen completely, which it
>>>>> does during the more severe freezes.
>>>>    OK, it seems as if your machine has some problems with memory
>>>> allocations. Can you capture /proc/vmstat before the freeze and
>>>> after the
>>>> freeze and send them for comparison. Maybe it will show us what is the
>>>> system doing.
>>> t=01:06 http://sundman.iki.fi/vmstat.pre-freeze.txt
>>> t=01:08 http://sundman.iki.fi/vmstat.during-freeze.txt
>>> t=01:12 http://sundman.iki.fi/vmstat.post-freeze.txt
>> Here are some more vmstats:
>> http://sundman.iki.fi/vmstats.tar.gz
>>
>> They are from running this:
>> while true; do cat /proc/vmstat > "vmstat.$(date +%FT%X).txt"; sleep
>> 10; done
>>
>> There were lots and lots of freezes for almost 20 mins from 14:37:45
>> onwards, pretty much constantly, but at 14:56:50 the freezes
>> suddenly stopped and everything went back to how it should be.
>    I was looking into the data but they didn't show anything problematic.
> The machine seems to be writing a lot but there's always some free memory,
> even direct reclaim isn't ever entered. Hum, actually you wrote iotop isn't
> showing much IO going on but vmstats show there is about 1 GB written
> during the freeze. It is not a huge amount given the time span but it
> certainly gives a few MB/s of write load.

I didn't watch iotop during this particular freeze. I'll try to keep an 
eye on iotop in the future. Are there some particular options I should 
run iotop with, or is a "nice -n -20 iotop -od3" fine?

> There's surprisingly high number of allocations going on but that may be
> due to the IO activity. So let's try something else: Can you switch to
> console and when the hang happens press Alt-Sysrq-w (or you can just do
> "echo w >/proc/sysrq-trigger" if the machine is live enough to do that).
> Then send me the output from dmesg.  Thanks!

Sure! Here are two:
http://sundman.iki.fi/dmesg-1.txt
http://sundman.iki.fi/dmesg-2.txt


Best regards,
Marcus



* Re: Debugging system freezes on filesystem writes
  2012-11-16  1:11             ` Marcus Sundman
@ 2012-11-21 23:30               ` Jan Kara
  2012-11-27 16:14                 ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Kara @ 2012-11-21 23:30 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, linux-kernel

On Fri 16-11-12 03:11:22, Marcus Sundman wrote:
> On 13.11.2012 15:51, Jan Kara wrote:
> >On Fri 09-11-12 15:12:43, Marcus Sundman wrote:
> >>On 09.11.2012 01:41, Marcus Sundman wrote:
> >>>On 07.11.2012 18:17, Jan Kara wrote:
> >>>>On Fri 02-11-12 04:19:24, Marcus Sundman wrote:
> >>>>>Also, and this might be important, according to iotop there is
> >>>>>almost no disk writing going on during the freeze. (Occasionally
> >>>>>there are a few MB/s, but mostly it's 0-200 kB/s.) Well, at least
> >>>>>when an iotop running on nice -20 hasn't frozen completely, which it
> >>>>>does during the more severe freezes.
> >>>>   OK, it seems as if your machine has some problems with memory
> >>>>allocations. Can you capture /proc/vmstat before the freeze and
> >>>>after the
> >>>>freeze and send them for comparison. Maybe it will show us what is the
> >>>>system doing.
> >>>t=01:06 http://sundman.iki.fi/vmstat.pre-freeze.txt
> >>>t=01:08 http://sundman.iki.fi/vmstat.during-freeze.txt
> >>>t=01:12 http://sundman.iki.fi/vmstat.post-freeze.txt
> >>Here are some more vmstats:
> >>http://sundman.iki.fi/vmstats.tar.gz
> >>
> >>They are from running this:
> >>while true; do cat /proc/vmstat > "vmstat.$(date +%FT%X).txt"; sleep
> >>10; done
> >>
> >>There were lots and lots of freezes for almost 20 mins from 14:37:45
> >>onwards, pretty much constantly, but at 14:56:50 the freezes
> >>suddenly stopped and everything went back to how it should be.
> >   I was looking into the data but they didn't show anything problematic.
> >The machine seems to be writing a lot but there's always some free memory,
> >even direct reclaim isn't ever entered. Hum, actually you wrote iotop isn't
> >showing much IO going on but vmstats show there is about 1 GB written
> >during the freeze. It is not a huge amount given the time span but it
> >certainly gives a few MB/s of write load.
> 
> I didn't watch iotop during this particular freeze. I'll try to keep
> an eye on iotop in the future. Is there some particular options I
> should run iotop with, or is a "nice -n -20 iotop -od3" fine?
  I'm not really familiar with iotop :). Usually I use iostat...

> >There's surprisingly high number of allocations going on but that may be
> >due to the IO activity. So let's try something else: Can you switch to
> >console and when the hang happens press Alt-Sysrq-w (or you can just do
> >"echo w >/proc/sysrq-trigger" if the machine is live enough to do that).
> >Then send me the output from dmesg.  Thanks!
> 
> Sure! Here are two:
> http://sundman.iki.fi/dmesg-1.txt
> http://sundman.iki.fi/dmesg-2.txt
  Thanks for those and sorry for the delay (I was busy with other stuff).
I had a look into those traces and I have to say I'm not much wiser. In the
first dump there is just kswapd waiting for IO. In the second dump there
are more processes waiting for IO (mostly for reads - nautilus,
thunderbird, opera, ...) but nothing really surprising. So I'm at a loss as to
what could cause the hangs you observe. Recalling that you wrote even simple
programs like top hang, maybe it is some CPU scheduling issue? Can you boot
with the noautogroup kernel option?

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2012-11-21 23:30               ` Jan Kara
@ 2012-11-27 16:14                 ` Marcus Sundman
  2012-12-05 15:32                   ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2012-11-27 16:14 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-kernel

On 22.11.2012 01:30, Jan Kara wrote:
> On Fri 16-11-12 03:11:22, Marcus Sundman wrote:
>> On 13.11.2012 15:51, Jan Kara wrote:
>>> On Fri 09-11-12 15:12:43, Marcus Sundman wrote:
>>>> On 09.11.2012 01:41, Marcus Sundman wrote:
>>>>> On 07.11.2012 18:17, Jan Kara wrote:
>>>>>> On Fri 02-11-12 04:19:24, Marcus Sundman wrote:
>>>>>>> Also, and this might be important, according to iotop there is
>>>>>>> almost no disk writing going on during the freeze. (Occasionally
>>>>>>> there are a few MB/s, but mostly it's 0-200 kB/s.) Well, at least
>>>>>>> when an iotop running on nice -20 hasn't frozen completely, which it
>>>>>>> does during the more severe freezes.
>>>>>>    OK, it seems as if your machine has some problems with memory
>>>>>> allocations. Can you capture /proc/vmstat before the freeze and
>>>>>> after the
>>>>>> freeze and send them for comparison. Maybe it will show us what is the
>>>>>> system doing.
>>>>> t=01:06 http://sundman.iki.fi/vmstat.pre-freeze.txt
>>>>> t=01:08 http://sundman.iki.fi/vmstat.during-freeze.txt
>>>>> t=01:12 http://sundman.iki.fi/vmstat.post-freeze.txt
>>>> Here are some more vmstats:
>>>> http://sundman.iki.fi/vmstats.tar.gz
>>>>
>>>> They are from running this:
>>>> while true; do cat /proc/vmstat > "vmstat.$(date +%FT%X).txt"; sleep
>>>> 10; done
>>>>
>>>> There were lots and lots of freezes for almost 20 mins from 14:37:45
>>>> onwards, pretty much constantly, but at 14:56:50 the freezes
>>>> suddenly stopped and everything went back to how it should be.
>>>    I was looking into the data but they didn't show anything problematic.
>>> The machine seems to be writing a lot but there's always some free memory,
>>> even direct reclaim isn't ever entered. Hum, actually you wrote iotop isn't
>>> showing much IO going on but vmstats show there is about 1 GB written
>>> during the freeze. It is not a huge amount given the time span but it
>>> certainly gives a few MB/s of write load.
>> I didn't watch iotop during this particular freeze. I'll try to keep
>> an eye on iotop in the future. Is there some particular options I
>> should run iotop with, or is a "nice -n -20 iotop -od3" fine?
>    I'm not really familiar with iotop :). Usually I use iostat...

OK, which options for iostat should I use then? :)

>>> There's surprisingly high number of allocations going on but that may be
>>> due to the IO activity. So let's try something else: Can you switch to
>>> console and when the hang happens press Alt-Sysrq-w (or you can just do
>>> "echo w >/proc/sysrq-trigger" if the machine is live enough to do that).
>>> Then send me the output from dmesg.  Thanks!
>> Sure! Here are two:
>> http://sundman.iki.fi/dmesg-1.txt
>> http://sundman.iki.fi/dmesg-2.txt
>    Thanks for those and sorry for the delay (I was busy with other stuff).
> I had a look into those traces and I have to say I'm not much wiser. In the
> first dump there is just kswapd waiting for IO. In the second dump there
> are more processes waiting for IO (mostly for reads - nautilus,
> thunderbird, opera, ...) but nothing really surprising. So I'm lost what
> could cause the hangs you observe.

Yes, mostly it's difficult to trigger the sysrq thingy, because by the 
time I manage to switch to the console or run that echo to proc in a 
terminal, the worst is already over.

> Recalling you wrote even simple programs
> like top hang, maybe it is some CPU scheduling issue? Can you boot with
> noautogroup kernel option?

Sure. I've been running with noautogroup for almost a week now, but no 
big change one way or the other. (E.g., it's still impossible to listen 
to music, because the songs will start skipping/looping several times 
during each song even if there isn't any big "hang" happening. And 
uncompressing a 100 MB archive (with nice '19' and ionice 'idle') is 
still, after a while, followed by a couple of minutes of superhigh I/O 
wait causing everything to become really slow.)


- Marcus



* Re: Debugging system freezes on filesystem writes
  2012-11-27 16:14                 ` Marcus Sundman
@ 2012-12-05 15:32                   ` Jan Kara
  2013-02-20  8:42                     ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Kara @ 2012-12-05 15:32 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, linux-kernel

On Tue 27-11-12 18:14:42, Marcus Sundman wrote:
> On 22.11.2012 01:30, Jan Kara wrote:
> >On Fri 16-11-12 03:11:22, Marcus Sundman wrote:
> >>On 13.11.2012 15:51, Jan Kara wrote:
> >>>On Fri 09-11-12 15:12:43, Marcus Sundman wrote:
> >>>>On 09.11.2012 01:41, Marcus Sundman wrote:
> >>>>>On 07.11.2012 18:17, Jan Kara wrote:
> >>>>>>On Fri 02-11-12 04:19:24, Marcus Sundman wrote:
> >>>>>>>Also, and this might be important, according to iotop there is
> >>>>>>>almost no disk writing going on during the freeze. (Occasionally
> >>>>>>>there are a few MB/s, but mostly it's 0-200 kB/s.) Well, at least
> >>>>>>>when an iotop running on nice -20 hasn't frozen completely, which it
> >>>>>>>does during the more severe freezes.
> >>>>>>   OK, it seems as if your machine has some problems with memory
> >>>>>>allocations. Can you capture /proc/vmstat before the freeze and
> >>>>>>after the
> >>>>>>freeze and send them for comparison. Maybe it will show us what is the
> >>>>>>system doing.
> >>>>>t=01:06 http://sundman.iki.fi/vmstat.pre-freeze.txt
> >>>>>t=01:08 http://sundman.iki.fi/vmstat.during-freeze.txt
> >>>>>t=01:12 http://sundman.iki.fi/vmstat.post-freeze.txt
> >>>>Here are some more vmstats:
> >>>>http://sundman.iki.fi/vmstats.tar.gz
> >>>>
> >>>>They are from running this:
> >>>>while true; do cat /proc/vmstat > "vmstat.$(date +%FT%X).txt"; sleep
> >>>>10; done
> >>>>
> >>>>There were lots and lots of freezes for almost 20 mins from 14:37:45
> >>>>onwards, pretty much constantly, but at 14:56:50 the freezes
> >>>>suddenly stopped and everything went back to how it should be.
> >>>   I was looking into the data but they didn't show anything problematic.
> >>>The machine seems to be writing a lot but there's always some free memory,
> >>>even direct reclaim isn't ever entered. Hum, actually you wrote iotop isn't
> >>>showing much IO going on but vmstats show there is about 1 GB written
> >>>during the freeze. It is not a huge amount given the time span but it
> >>>certainly gives a few MB/s of write load.
> >>I didn't watch iotop during this particular freeze. I'll try to keep
> >>an eye on iotop in the future. Is there some particular options I
> >>should run iotop with, or is a "nice -n -20 iotop -od3" fine?
> >   I'm not really familiar with iotop :). Usually I use iostat...
> 
> OK, which options for iostat should I use then? :)
  I'm back from vacation. Sorry for the delay. You can use
iostat -x 1

> >>>There's surprisingly high number of allocations going on but that may be
> >>>due to the IO activity. So let's try something else: Can you switch to
> >>>console and when the hang happens press Alt-Sysrq-w (or you can just do
> >>>"echo w >/proc/sysrq-trigger" if the machine is live enough to do that).
> >>>Then send me the output from dmesg.  Thanks!
> >>Sure! Here are two:
> >>http://sundman.iki.fi/dmesg-1.txt
> >>http://sundman.iki.fi/dmesg-2.txt
> >   Thanks for those and sorry for the delay (I was busy with other stuff).
> >I had a look into those traces and I have to say I'm not much wiser. In the
> >first dump there is just kswapd waiting for IO. In the second dump there
> >are more processes waiting for IO (mostly for reads - nautilus,
> >thunderbird, opera, ...) but nothing really surprising. So I'm lost what
> >could cause the hangs you observe.
> 
> Yes, mostly it's difficult to trigger the sysrq thingy, because by
> the time I manage to switch to the console or running that echo to
> proc in a terminal the worst is already over.
  I see. Maybe you could have something like
while true; do echo w >/proc/sysrq-trigger; sleep 10; done
  running in the background?
  
> >Recalling you wrote even simple programs
> >like top hang, maybe it is some CPU scheduling issue? Can you boot with
> >noautogroup kernel option?
> 
> Sure. I've been running with noautogroup for almost a week now, but
> no big change one way or the other. (E.g., it's still impossible to
> listen to music, because the songs will start skipping/looping
> several times during each song even if there isn't any big "hang"
> happening. And uncompressing a 100 MB archive (with nice '19' and
> ionice 'idle') is still, after a while, followed by a couple of
> minutes of superhigh I/O wait causing everything to become really
> slow.)
  Hum, I'm starting to wonder what's so special about your system that you
see these hangs while no one else seems to be hitting them. Your kernel is a
standard one from Ubuntu so tons of people run it. Your HW doesn't seem to
be too special either.

BTW the fact that you ionice 'tar' doesn't change anything because all the
writes are done in the context of the kernel flusher thread (tar just writes
data into the cache). But still it shouldn't lock the machine up. What might be
an interesting test, though, is running:
  dd if=/dev/zero of=file bs=1M count=200 oflag=direct

Does this trigger any hangs?

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: Debugging system freezes on filesystem writes
  2012-12-05 15:32                   ` Jan Kara
@ 2013-02-20  8:42                     ` Marcus Sundman
  2013-02-20 11:40                       ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2013-02-20  8:42 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-kernel

On 05.12.2012 17:32, Jan Kara wrote:
> On Tue 27-11-12 18:14:42, Marcus Sundman wrote:
>> On 22.11.2012 01:30, Jan Kara wrote:
>>> On Fri 16-11-12 03:11:22, Marcus Sundman wrote:
>>>> On 13.11.2012 15:51, Jan Kara wrote:
>>>>> On Fri 09-11-12 15:12:43, Marcus Sundman wrote:
>>>>>> On 09.11.2012 01:41, Marcus Sundman wrote:
>>>>>>> On 07.11.2012 18:17, Jan Kara wrote:
>>>>>>>> On Fri 02-11-12 04:19:24, Marcus Sundman wrote:
>>>>>>>>> Also, and this might be important, according to iotop there is
>>>>>>>>> almost no disk writing going on during the freeze. (Occasionally
>>>>>>>>> there are a few MB/s, but mostly it's 0-200 kB/s.) Well, at least
>>>>>>>>> when an iotop running on nice -20 hasn't frozen completely, which it
>>>>>>>>> does during the more severe freezes.
>>>>>>>>    OK, it seems as if your machine has some problems with memory
>>>>>>>> allocations. Can you capture /proc/vmstat before the freeze and
>>>>>>>> after the
>>>>>>>> freeze and send them for comparison. Maybe it will show us what is the
>>>>>>>> system doing.
>>>>>>> t=01:06 http://sundman.iki.fi/vmstat.pre-freeze.txt
>>>>>>> t=01:08 http://sundman.iki.fi/vmstat.during-freeze.txt
>>>>>>> t=01:12 http://sundman.iki.fi/vmstat.post-freeze.txt
>>>>>> Here are some more vmstats:
>>>>>> http://sundman.iki.fi/vmstats.tar.gz
>>>>>>
>>>>>> They are from running this:
>>>>>> while true; do cat /proc/vmstat > "vmstat.$(date +%FT%X).txt"; sleep
>>>>>> 10; done
>>>>>>
>>>>>> There were lots and lots of freezes for almost 20 mins from 14:37:45
>>>>>> onwards, pretty much constantly, but at 14:56:50 the freezes
>>>>>> suddenly stopped and everything went back to how it should be.
>>>>>    I was looking into the data but they didn't show anything problematic.
>>>>> The machine seems to be writing a lot but there's always some free memory,
>>>>> even direct reclaim isn't ever entered. Hum, actually you wrote iotop isn't
>>>>> showing much IO going on but vmstats show there is about 1 GB written
>>>>> during the freeze. It is not a huge amount given the time span but it
>>>>> certainly gives a few MB/s of write load.
>>>> I didn't watch iotop during this particular freeze. I'll try to keep
>>>> an eye on iotop in the future. Are there some particular options I
>>>> should run iotop with, or is a "nice -n -20 iotop -od3" fine?
>>>    I'm not really familiar with iotop :). Usually I use iostat...
>> OK, which options for iostat should I use then? :)
>    I'm back from vacation. Sorry for the delay. You can use
> iostat -x 1

Just when you got back I started my pre-vacation work stress and am now 
ending my post-vacation work stress... :)

That iostat -x 1 shows %util at 100 and w_await at 2,000-70,000, like so:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
            9.05    0.00    1.51   66.33    0.00   23.12
Device:  rrqm/s  wrqm/s   r/s   w/s  rkB/s  wkB/s avgrq-sz avgqu-sz    await r_await  w_await   svctm  %util
sda        0.00    0.00  0.00  1.00   0.00 184.00   368.00   137.08 62199.00    0.00 62199.00 1000.00 100.00


>>>>> There's a surprisingly high number of allocations going on but that may be
>>>>> due to the IO activity. So let's try something else: Can you switch to
>>>>> console and when the hang happens press Alt-Sysrq-w (or you can just do
>>>>> "echo w >/proc/sysrq-trigger" if the machine is live enough to do that).
>>>>> Then send me the output from dmesg.  Thanks!
>>>> Sure! Here are two:
>>>> http://sundman.iki.fi/dmesg-1.txt
>>>> http://sundman.iki.fi/dmesg-2.txt
>>>    Thanks for those and sorry for the delay (I was busy with other stuff).
>>> I had a look into those traces and I have to say I'm not much wiser. In the
>>> first dump there is just kswapd waiting for IO. In the second dump there
>>> are more processes waiting for IO (mostly for reads - nautilus,
>>> thunderbird, opera, ...) but nothing really surprising. So I'm lost what
>>> could cause the hangs you observe.
>> Yes, mostly it's difficult to trigger the sysrq thingy, because by
>> the time I manage to switch to the console or running that echo to
>> proc in a terminal the worst is already over.
>    I see. Maybe you could have something like
> while true; do echo w >/proc/sysrq-trigger; sleep 10; done
>    running in the background?

Sure, but I suspect it'll take until the worst is over before it manages 
to load and execute that "echo w".

>>> Recalling you wrote even simple programs
>>> like top hang, maybe it is some CPU scheduling issue? Can you boot with
>>> noautogroup kernel option?
>> Sure. I've been running with noautogroup for almost a week now, but
>> no big change one way or the other. (E.g., it's still impossible to
>> listen to music, because the songs will start skipping/looping
>> several times during each song even if there isn't any big "hang"
>> happening. And uncompressing a 100 MB archive (with nice '19' and
>> ionice 'idle') is still, after a while, followed by a couple of
>> minutes of superhigh I/O wait causing everything to become really
>> slow.)
>    Hum, I'm starting to wonder what's so special about your system that you
> see these hangs while no one else seems to be hitting them. Your kernel is a
> standard one from Ubuntu so tons of people run it. Your HW doesn't seem to
> be too special either.
>
> BTW the fact that you ionice 'tar' doesn't change anything because all the
> writes are done in the context of the kernel flusher thread (tar just writes
> data into the cache). But still it shouldn't lock the machine up. What might be
> an interesting test, though, is running:
>    dd if=/dev/zero of=file bs=1M count=200 oflag=direct
>
> Does this trigger any hangs?

Yes, sure. If I run nothing else then it's not so severe, but the system 
is still quite unusable during the time it runs that dd.

Also, the speeds are closer to an Amiga 500-era floppy drive than to an 
SSD from 2012, which this is:

$ dd if=/dev/zero of=iotest-file bs=1M count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 171.701 s, 1.2 MB/s
$


Regards,
Marcus



* Re: Debugging system freezes on filesystem writes
  2013-02-20  8:42                     ` Marcus Sundman
@ 2013-02-20 11:40                       ` Marcus Sundman
  2013-02-22 20:51                         ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2013-02-20 11:40 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-kernel

On 20.02.2013 10:42, Marcus Sundman wrote:
> On 05.12.2012 17:32, Jan Kara wrote:
>>    I see. Maybe you could have something like
>> while true; do echo w >/proc/sysrq-trigger; sleep 10; done
>>    running in the background?
>
> Sure, but I suspect it'll take until the worst is over before it 
> manages to load and execute that "echo w".

Here is a big run of sysrq-triggering, all while I was uncompressing a 
big rar file causing the whole system to be utterly unusable.
NB: Even with realtime I/O-priority the sysrq couldn't be triggered 
between 12:41:54 and 12:42:49, as you can see from the dmesg-3.txt file.

$ sudo ionice -c 1 su
# ionice
realtime: prio 4
# while true; do sleep 10; echo w >/proc/sysrq-trigger; done
^C
# tail -n +1700 /var/log/syslog >dmesg-3.txt

http://sundman.iki.fi/dmesg-3.txt


Regards,
Marcus


* Re: Debugging system freezes on filesystem writes
  2013-02-20 11:40                       ` Marcus Sundman
@ 2013-02-22 20:51                         ` Jan Kara
  2013-02-22 23:27                           ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Kara @ 2013-02-22 20:51 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, linux-kernel

On Wed 20-02-13 13:40:03, Marcus Sundman wrote:
> On 20.02.2013 10:42, Marcus Sundman wrote:
> >On 05.12.2012 17:32, Jan Kara wrote:
> >>   I see. Maybe you could have something like
> >>while true; do echo w >/proc/sysrq-trigger; sleep 10; done
> >>   running in the background?
> >
> >Sure, but I suspect it'll take until the worst is over before it
> >manages to load and execute that "echo w".
> 
> Here is a big run of sysrq-triggering, all while I was uncompressing
> a big rar file causing the whole system to be utterly unusable.
> NB: Even with realtime I/O-priority the sysrq couldn't be triggered
> between 12:41:54 and 12:42:49, as you can see from the dmesg-3.txt
> file.
> 
> $ sudo ionice -c 1 su
> # ionice
> realtime: prio 4
> # while true; do sleep 10; echo w >/proc/sysrq-trigger; done
> ^C
> # tail -n +1700 /var/log/syslog >dmesg-3.txt
> 
> http://sundman.iki.fi/dmesg-3.txt
  Thanks for the traces. I was looking at them and we always seem to be
waiting for IO. There doesn't seem to be much CPU load either.

I'm actually starting to suspect the SSD in your laptop. The svctm field
from iostat output shows it takes 1 second on average to complete an IO
request. That is awfully slow given one request has ~180 KB of data on
average. Ah, one more idea - can you post your /proc/mounts please?
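(For reference, the ~180 KB figure follows from the iostat line earlier in
the thread: avgrq-sz is reported in 512-byte sectors.)

```shell
# avgrq-sz = 368 sectors, each 512 bytes
echo $(( 368 * 512 ))   # 188416 bytes, i.e. ~184 KB per request
# with svctm around 1000 ms that is well under 200 KB/s of sustained writeback
```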

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2013-02-22 20:51                         ` Jan Kara
@ 2013-02-22 23:27                           ` Marcus Sundman
  2013-02-24  0:12                             ` Dave Chinner
  2013-02-25 13:05                             ` Jan Kara
  0 siblings, 2 replies; 34+ messages in thread
From: Marcus Sundman @ 2013-02-22 23:27 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-kernel

On 22.02.2013 22:51, Jan Kara wrote:
> On Wed 20-02-13 13:40:03, Marcus Sundman wrote:
>> On 20.02.2013 10:42, Marcus Sundman wrote:
>>> On 05.12.2012 17:32, Jan Kara wrote:
>>>>    I see. Maybe you could have something like
>>>> while true; do echo w >/proc/sysrq-trigger; sleep 10; done
>>>>    running in the background?
>>> Sure, but I suspect it'll take until the worst is over before it
>>> manages to load and execute that "echo w".
>> Here is a big run of sysrq-triggering, all while I was uncompressing
>> a big rar file causing the whole system to be utterly unusable.
>> NB: Even with realtime I/O-priority the sysrq couldn't be triggered
>> between 12:41:54 and 12:42:49, as you can see from the dmesg-3.txt
>> file.
>>
>> $ sudo ionice -c 1 su
>> # ionice
>> realtime: prio 4
>> # while true; do sleep 10; echo w >/proc/sysrq-trigger; done
>> ^C
>> # tail -n +1700 /var/log/syslog >dmesg-3.txt
>>
>> http://sundman.iki.fi/dmesg-3.txt
>    Thanks for the traces. I was looking at them and we seem to be always
> waiting for IO. There don't seem to be that much CPU load either.
>
> I'm actually starting to suspect the SSD in your laptop.

I've suspected the driver, because I don't remember it being slow in 
Windows before I wiped it and installed ubuntu on it. (I didn't do very 
much with it in Windows, though. I just downloaded the ubuntu image, but 
AFAICR it had no problem saving the image at the speed of my internet 
connection (which is normally around 6 MiB/s), but nowadays I have to 
throttle my downloads to less than 1 MiB/s for it to not lock up so much.)

> The svctm field
> from iostat output shows it takes 1 second on average to complete an IO
> request. That is awfully slow given one request has ~180 KB of data on
> average. Ah, one more idea - can you post your /proc/mounts please?

Sure:
> $ cat /proc/mounts
> rootfs / rootfs rw 0 0
> sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
> proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
> udev /dev devtmpfs rw,relatime,size=1964816k,nr_inodes=491204,mode=755 0 0
> devpts /dev/pts devpts 
> rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
> tmpfs /run tmpfs rw,nosuid,relatime,size=789652k,mode=755 0 0
> /dev/disk/by-uuid/5bfa7a58-2d35-4758-954e-4deafb09b892 / ext4 
> rw,noatime,discard,errors=remount-ro 0 0
> none /sys/fs/fuse/connections fusectl rw,relatime 0 0
> none /sys/kernel/debug debugfs rw,relatime 0 0
> none /sys/kernel/security securityfs rw,relatime 0 0
> none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
> none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
> none /run/user tmpfs 
> rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755 0 0
> /dev/sda6 /home ext4 rw,noatime,discard 0 0
> binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc 
> rw,nosuid,nodev,noexec,relatime 0 0
> gvfsd-fuse /run/user/marcus/gvfs fuse.gvfsd-fuse 
> rw,nosuid,nodev,relatime,user_id=1000,group_id=100 0 0

Both / and /home are on the same SSD and suffer from the same problem. 
(I think the swap does as well, but I have my swappiness set very low to 
minimize swapping.)

Regards,
Marcus


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: Debugging system freezes on filesystem writes
  2013-02-22 23:27                           ` Marcus Sundman
@ 2013-02-24  0:12                             ` Dave Chinner
  2013-02-24  1:20                               ` Theodore Ts'o
  2013-02-25 13:05                             ` Jan Kara
  1 sibling, 1 reply; 34+ messages in thread
From: Dave Chinner @ 2013-02-24  0:12 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, linux-kernel

On Sat, Feb 23, 2013 at 01:27:38AM +0200, Marcus Sundman wrote:
> >$ cat /proc/mounts
> >rootfs / rootfs rw 0 0
> >sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
> >proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
> >udev /dev devtmpfs rw,relatime,size=1964816k,nr_inodes=491204,mode=755 0 0
> >devpts /dev/pts devpts
> >rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
> >tmpfs /run tmpfs rw,nosuid,relatime,size=789652k,mode=755 0 0
> >/dev/disk/by-uuid/5bfa7a58-2d35-4758-954e-4deafb09b892 / ext4
> >rw,noatime,discard,errors=remount-ro 0 0
              ^^^^^^^

> >none /sys/fs/fuse/connections fusectl rw,relatime 0 0
> >none /sys/kernel/debug debugfs rw,relatime 0 0
> >none /sys/kernel/security securityfs rw,relatime 0 0
> >none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
> >none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
> >none /run/user tmpfs
> >rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755 0 0
> >/dev/sda6 /home ext4 rw,noatime,discard 0 0
                                   ^^^^^^^

I'd say that's your problem....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Debugging system freezes on filesystem writes
  2013-02-24  0:12                             ` Dave Chinner
@ 2013-02-24  1:20                               ` Theodore Ts'o
  2013-02-26 18:41                                 ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Theodore Ts'o @ 2013-02-24  1:20 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Marcus Sundman, Jan Kara, linux-kernel

On Sun, Feb 24, 2013 at 11:12:22AM +1100, Dave Chinner wrote:
> > >/dev/sda6 /home ext4 rw,noatime,discard 0 0
>                                    ^^^^^^^
> I'd say that's your problem....

Looks like the Sandisk U100 is a good SSD for me to put on my personal
"avoid" list:

http://thessdreview.com/our-reviews/asus-zenbook-ssd-review-not-necessarily-sandforce-driven-shows-significant-speed-bump/

There are a number of SSD's which do not implement "trim" efficiently,
so these days, the recommended way to use trim is to run the "fstrim"
command out of crontab.
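A minimal sketch of the crontab approach, as a weekly cron job (the
/etc/cron.weekly path is the usual Debian/Ubuntu convention, and the mount
points are the ones from this thread; adjust both for your system):

```shell
#!/bin/sh
# /etc/cron.weekly/fstrim -- discard unused blocks in one weekly batch
# instead of on every delete (which is what the 'discard' mount option does)
fstrim -v /
fstrim -v /home
```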

There are some high performance flash devices (especially PCIe
attached flash devices, where the TRIM command doesn't necessarily
mean waiting for the entire contents of the Native Command Queue to
drain) where using the discard mount option makes sense for best
performance, but for most SATA drives (especially the really
cheap-sh*t ones), I don't recommend it.  If it weren't for the fact
that these devices exist and the discard option is especially useful
for them, I probably would have removed the discard option from ext4.

						- Ted


* Re: Debugging system freezes on filesystem writes
  2013-02-22 23:27                           ` Marcus Sundman
  2013-02-24  0:12                             ` Dave Chinner
@ 2013-02-25 13:05                             ` Jan Kara
  1 sibling, 0 replies; 34+ messages in thread
From: Jan Kara @ 2013-02-25 13:05 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, linux-kernel

On Sat 23-02-13 01:27:38, Marcus Sundman wrote:
> Sure:
> >$ cat /proc/mounts
> >rootfs / rootfs rw 0 0
> >sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
> >proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
> >udev /dev devtmpfs rw,relatime,size=1964816k,nr_inodes=491204,mode=755 0 0
> >devpts /dev/pts devpts
> >rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
> >tmpfs /run tmpfs rw,nosuid,relatime,size=789652k,mode=755 0 0
> >/dev/disk/by-uuid/5bfa7a58-2d35-4758-954e-4deafb09b892 / ext4
> >rw,noatime,discard,errors=remount-ro 0 0
> >none /sys/fs/fuse/connections fusectl rw,relatime 0 0
> >none /sys/kernel/debug debugfs rw,relatime 0 0
> >none /sys/kernel/security securityfs rw,relatime 0 0
> >none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
> >none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
> >none /run/user tmpfs
> >rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755 0 0
> >/dev/sda6 /home ext4 rw,noatime,discard 0 0
> >binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc
> >rw,nosuid,nodev,noexec,relatime 0 0
> >gvfsd-fuse /run/user/marcus/gvfs fuse.gvfsd-fuse
> >rw,nosuid,nodev,relatime,user_id=1000,group_id=100 0 0
> 
> Both / and /home are on the same SSD and suffer from the same
> problem. (I think the swap does as well, but I have my swappiness
> set very low to minimize swapping.)
  Yeah, my suspicion is confirmed. Try removing the 'discard' option. See
Ted's email for more details.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2013-02-24  1:20                               ` Theodore Ts'o
@ 2013-02-26 18:41                                 ` Marcus Sundman
  2013-02-26 22:17                                   ` Theodore Ts'o
  2013-02-26 23:17                                   ` Jan Kara
  0 siblings, 2 replies; 34+ messages in thread
From: Marcus Sundman @ 2013-02-26 18:41 UTC (permalink / raw)
  To: Theodore Ts'o, Dave Chinner, Jan Kara, linux-kernel

On 24.02.2013 03:20, Theodore Ts'o wrote:
> On Sun, Feb 24, 2013 at 11:12:22AM +1100, Dave Chinner wrote:
>>>> /dev/sda6 /home ext4 rw,noatime,discard 0 0
>>                                     ^^^^^^^
>> I'd say that's your problem....
> Looks like the Sandisk U100 is a good SSD for me to put on my personal
> "avoid" list:
>
> http://thessdreview.com/our-reviews/asus-zenbook-ssd-review-not-necessarily-sandforce-driven-shows-significant-speed-bump/
>
> There are a number of SSD's which do not implement "trim" efficiently,
> so these days, the recommended way to use trim is to run the "fstrim"
> command out of crontab.

OK. Removing 'discard' made it much better (the 60-600 second freezes 
are now 1-50 second freezes), but it's still at least an order of 
magnitude worse than a normal HD. When writing, that is -- reading is 
very fast (when there's no writing going on).

So, after reading up a bit on this trimming I'm thinking maybe my 
filesystem's block sizes don't match up with my SSD's blocks (or 
whatever its write unit is called). Then writing a FS block would always 
write to multiple SSD blocks, causing multiple read-erase-write 
sequences, right? So how can I check this, and how can I make the FS 
blocks match the SSD blocks?

Best regards,
Marcus



* Re: Debugging system freezes on filesystem writes
  2013-02-26 18:41                                 ` Marcus Sundman
@ 2013-02-26 22:17                                   ` Theodore Ts'o
  2013-02-26 23:17                                   ` Jan Kara
  1 sibling, 0 replies; 34+ messages in thread
From: Theodore Ts'o @ 2013-02-26 22:17 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Dave Chinner, Jan Kara, linux-kernel

On Tue, Feb 26, 2013 at 08:41:36PM +0200, Marcus Sundman wrote:
> 
> So, after reading up a bit on this trimming I'm thinking maybe my
> filesystem's block sizes don't match up with my SSD's blocks (or
> whatever its write unit is called). Then writing a FS block would
> always write to multiple SSD blocks, causing multiple
> read-erase-write sequences, right? So how can I check this, and how
> can I make the FS blocks match the SSD blocks?

The erase block size for SSD's is typically in the area of 2MB (that's
megabytes), with a page size of typically 4k, 8k, or 16k.  So that
means that erases take place with a granularity of 2 megabytes, and
writes take place in chunks of the page size.  It's up to the Flash
Translation Layer to take the writes and map them to the NAND flash in
an efficient way.  This is the difference between high quality SSD's
and really crappy SSD's.  One of the best ways of measuring how good
your SSD is is to run a random 4k write test and see how well it
handles a random write workload.  I'm guessing you have an SSD which
is really terrible at this.
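A sketch of the kind of test Ted describes, assuming the fio tool is
installed (the file name, size, runtime, and queue depth here are arbitrary
choices, not anything from this thread):

```shell
# 4k random writes with direct I/O against a scratch file;
# watch the reported IOPS and completion latency
fio --name=randwrite-test --filename=fio-scratch --size=256M \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --runtime=30 --time_based --iodepth=16
rm fio-scratch
```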

In general you don't need to worry about alignment for most SSD's
(eMMC/SD devices are a different story) since historically, Windows
systems had partitions offset by 63 (512-byte) sectors, which is the
worst possible alignment.  So SSD's in general can handle misaligned
writes without any problems, or otherwise on Windows XP systems their
performance would be really crappy.  SD card or eMMC devices don't
deal with this well, so you need to worry about aligning your
partitions appropriately.  If your SSD is sensitive to partition
alignment, then it truly is a really crappy SSD.  My suggestion to
you is that the next time you buy an SSD, take a look at the reviews
at web sites such as Anandtech, and in particular take a look at the
4k random write benchmark numbers and see how they compare with the
competition.
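If one does want to rule out misalignment, the partition's start sector can
be checked directly -- a sketch, assuming the sda6 partition from this
thread; modern partitioning tools align to 1 MiB (2048 sectors):

```shell
# a start sector divisible by 2048 (1 MiB) is aligned for practical purposes
is_aligned() {
    if [ $(( $1 % 2048 )) -eq 0 ]; then echo aligned; else echo misaligned; fi
}
# on a live system: is_aligned "$(cat /sys/block/sda/sda6/start)"
is_aligned 2048   # the usual modern default -> aligned
is_aligned 63     # the old Windows XP offset -> misaligned
```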

As far as what you should do with your current SSD, if it's really
that bad, I'm not sure I'd trust precious data on it, and I'd
seriously consider simply getting a new SSD if budget allows this.
Intel has historically done a really good job with their QA.  They
spent several months qual'ing the Sandforce controller, and so they
were late to the market as a result, and their SSDs are generally a
bit more expensive.  However, the agreement they signed with Sandforce
meant that the reliability/performance fixes in the Sandforce firmware
which were the result of Intel's extended QA period would remain
exclusive to Intel SSD's for some period of time and hence wouldn't be
available to their competition.  Guess which manufacturer's SSDs I
generally tend to buy?  :-)

Regards,

						- Ted


* Re: Debugging system freezes on filesystem writes
  2013-02-26 18:41                                 ` Marcus Sundman
  2013-02-26 22:17                                   ` Theodore Ts'o
@ 2013-02-26 23:17                                   ` Jan Kara
  2013-09-12 12:57                                     ` Marcus Sundman
  1 sibling, 1 reply; 34+ messages in thread
From: Jan Kara @ 2013-02-26 23:17 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Theodore Ts'o, Dave Chinner, Jan Kara, linux-kernel

On Tue 26-02-13 20:41:36, Marcus Sundman wrote:
> On 24.02.2013 03:20, Theodore Ts'o wrote:
> >On Sun, Feb 24, 2013 at 11:12:22AM +1100, Dave Chinner wrote:
> >>>>/dev/sda6 /home ext4 rw,noatime,discard 0 0
> >>                                    ^^^^^^^
> >>I'd say that's your problem....
> >Looks like the Sandisk U100 is a good SSD for me to put on my personal
> >"avoid" list:
> >
> >http://thessdreview.com/our-reviews/asus-zenbook-ssd-review-not-necessarily-sandforce-driven-shows-significant-speed-bump/
> >
> >There are a number of SSD's which do not implement "trim" efficiently,
> >so these days, the recommended way to use trim is to run the "fstrim"
> >command out of crontab.
> 
> OK. Removing 'discard' made it much better (the 60-600 second
> freezes are now 1-50 second freezes), but it's still at least an
> order of magnitude worse than a normal HD. When writing, that is --
> reading is very fast (when there's no writing going on).
> 
> So, after reading up a bit on this trimming I'm thinking maybe my
> filesystem's block sizes don't match up with my SSD's blocks (or
> whatever its write unit is called). Then writing a FS block would
> always write to multiple SSD blocks, causing multiple
> read-erase-write sequences, right? So how can I check this, and how
> can I make the FS blocks match the SSD blocks?
  As Ted wrote, alignment isn't usually a problem with SSDs. And even if it
was, it would be at most a factor-of-2 slowdown and we don't seem to be at
that fine-grained level :)

At this point you might try mounting the fs with the nobarrier mount option (I
know you tried that before but without discard the difference could be more
visible), switching the IO scheduler to CFQ (for crappy SSDs it actually isn't
a bad choice), and we'll see how much we can squeeze out of your drive...
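Concretely, the two changes look like this (sda and /home as elsewhere in
this thread; note that nobarrier trades crash safety for speed, so it is a
diagnostic step rather than a recommendation):

```shell
# see which schedulers the kernel offers; the active one is in brackets
cat /sys/block/sda/queue/scheduler
# switch to CFQ until the next reboot (not persistent)
echo cfq | sudo tee /sys/block/sda/queue/scheduler
# remount one filesystem without write barriers
sudo mount -o remount,nobarrier /home
```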

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2013-02-26 23:17                                   ` Jan Kara
@ 2013-09-12 12:57                                     ` Marcus Sundman
  2013-09-12 13:10                                       ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2013-09-12 12:57 UTC (permalink / raw)
  To: Jan Kara; +Cc: Theodore Ts'o, Dave Chinner, linux-kernel

On 27.02.2013 01:17, Jan Kara wrote:
> On Tue 26-02-13 20:41:36, Marcus Sundman wrote:
>> On 24.02.2013 03:20, Theodore Ts'o wrote:
>>> On Sun, Feb 24, 2013 at 11:12:22AM +1100, Dave Chinner wrote:
>>>>>> /dev/sda6 /home ext4 rw,noatime,discard 0 0
>>>>                                     ^^^^^^^
>>>> I'd say that's your problem....
>>> Looks like the Sandisk U100 is a good SSD for me to put on my personal
>>> "avoid" list:
>>>
>>> http://thessdreview.com/our-reviews/asus-zenbook-ssd-review-not-necessarily-sandforce-driven-shows-significant-speed-bump/
>>>
>>> There are a number of SSD's which do not implement "trim" efficiently,
>>> so these days, the recommended way to use trim is to run the "fstrim"
>>> command out of crontab.
>> OK. Removing 'discard' made it much better (the 60-600 second
>> freezes are now 1-50 second freezes), but it's still at least an
>> order of magnitude worse than a normal HD. When writing, that is --
>> reading is very fast (when there's no writing going on).
>>
>> So, after reading up a bit on this trimming I'm thinking maybe my
>> filesystem's block sizes don't match up with my SSD's blocks (or
>> whatever its write unit is called). Then writing a FS block would
>> always write to multiple SSD blocks, causing multiple
>> read-erase-write sequences, right? So how can I check this, and how
>> can I make the FS blocks match the SSD blocks?
>    As Ted wrote, alignment isn't usually a problem with SSDs. And even if it
> was, it would be at most a factor 2 slow down and we don't seem to be at
> that fine grained level :)
>
> At this point you might try mounting the fs with nobarrier mount option (I
> know you tried that before but without discard the difference could be more
> visible), switching IO scheduler to CFQ (for crappy SSDs it actually isn't
> a bad choice), and we'll see how much we can squeeze out of your drive...

I repartitioned the drive and reinstalled ubuntu and after that it 
gladly wrote over 100 MB/s to the SSD without any hangs. However, after 
a couple of months I noticed it had degraded considerably, and it keeps 
degrading. Now it's slowly becoming completely unusable again, with 
write speeds of the magnitude 1 MB/s and dropping.

As far as I can tell I have not made any relevant changes. Also, the 
amount of free space hasn't changed considerably, but it seems that the 
longer it's been since I reformatted the drive the more free space is 
required for it to perform well.

So, maybe the cause is fragmentation? I tried running e4defrag and then 
fstrim, but it didn't really help (well, maybe a little bit, but after a 
couple of days it was back in unusable-land). Also, "e4defrag -c" gives 
a fragmentation score of less than 5, so...

Any ideas?


Best regards,
Marcus


* Re: Debugging system freezes on filesystem writes
  2013-09-12 12:57                                     ` Marcus Sundman
@ 2013-09-12 13:10                                       ` Jan Kara
  2013-09-12 13:47                                         ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Kara @ 2013-09-12 13:10 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, Theodore Ts'o, Dave Chinner, linux-kernel

On Thu 12-09-13 15:57:32, Marcus Sundman wrote:
> On 27.02.2013 01:17, Jan Kara wrote:
> >On Tue 26-02-13 20:41:36, Marcus Sundman wrote:
> >>On 24.02.2013 03:20, Theodore Ts'o wrote:
> >>>On Sun, Feb 24, 2013 at 11:12:22AM +1100, Dave Chinner wrote:
> >>>>>>/dev/sda6 /home ext4 rw,noatime,discard 0 0
> >>>>                                    ^^^^^^^
> >>>>I'd say that's your problem....
> >>>Looks like the Sandisk U100 is a good SSD for me to put on my personal
> >>>"avoid" list:
> >>>
> >>>http://thessdreview.com/our-reviews/asus-zenbook-ssd-review-not-necessarily-sandforce-driven-shows-significant-speed-bump/
> >>>
> >>>There are a number of SSD's which do not implement "trim" efficiently,
> >>>so these days, the recommended way to use trim is to run the "fstrim"
> >>>command out of crontab.
> >>OK. Removing 'discard' made it much better (the 60-600 second
> >>freezes are now 1-50 second freezes), but it's still at least an
> >>order of magnitude worse than a normal HD. When writing, that is --
> >>reading is very fast (when there's no writing going on).
> >>
> >>So, after reading up a bit on this trimming I'm thinking maybe my
> >>filesystem's block sizes don't match up with my SSD's blocks (or
> >>whatever its write unit is called). Then writing a FS block would
> >>always write to multiple SSD blocks, causing multiple
> >>read-erase-write sequences, right? So how can I check this, and how
> >>can I make the FS blocks match the SSD blocks?
> >   As Ted wrote, alignment isn't usually a problem with SSDs. And even if it
> >was, it would be at most a factor 2 slow down and we don't seem to be at
> >that fine grained level :)
> >
> >At this point you might try mounting the fs with nobarrier mount option (I
> >know you tried that before but without discard the difference could be more
> >visible), switching IO scheduler to CFQ (for crappy SSDs it actually isn't
> >a bad choice), and we'll see how much we can squeeze out of your drive...
> 
> I repartitioned the drive and reinstalled ubuntu and after that it
> gladly wrote over 100 MB/s to the SSD without any hangs. However,
> after a couple of months I noticed it had degraded considerably, and
> it keeps degrading. Now it's slowly becoming completely unusable
> again, with write speeds of the magnitude 1 MB/s and dropping.
> 
> As far as I can tell I have not made any relevant changes. Also, the
> amount of free space hasn't changed considerably, but it seems that
> the longer it's been since I reformatted the drive the more free
> space is required for it to perform well.
> 
> So, maybe the cause is fragmentation? I tried running e4defrag and
> then fstrim, but it didn't really help (well, maybe a little bit,
> but after a couple of days it was back in unusable-land). Also,
> "e4defrag -c" gives a fragmenation score of less than 5, so...
> 
> Any ideas?
  So now you run without the 'discard' mount option, right? My guess then
would be that the FTL layer on your SSD is just crappy: as the erase blocks
get more fragmented with filesystem use, it cannot keep up. But it's easy to
put the blame on someone else :)

You can check whether this is a problem of Linux or of your SSD by writing a
large file (a few GB or more) with 'dd if=/dev/zero of=testfile bs=1M
count=4096 oflag=direct'. What is the throughput? If it is bad, check the output
of 'filefrag -v testfile'. If the extents are reasonably large (1 MB and
more), then the problem is in your SSD firmware. Not much we can do about
it in that case...
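A scripted version of this check might look like the following (just a
sketch: filefrag comes from e2fsprogs, and some filesystems don't support
O_DIRECT, hence the fallback; the small default count is only for
illustration):

```shell
#!/bin/sh
# Sequential-write check; run it in a directory on the affected filesystem.
# COUNT is in 1 MiB units -- use 4096 or more to get a meaningful
# throughput number on real hardware.
COUNT="${COUNT:-64}"
dd if=/dev/zero of=testfile bs=1M count="$COUNT" oflag=direct 2>&1 ||
    dd if=/dev/zero of=testfile bs=1M count="$COUNT" 2>&1
# If the throughput was bad, see whether the file itself is fragmented:
filefrag -v testfile 2>/dev/null || true
```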

If it really is SSD's firmware, maybe you could try f2fs or similar flash
oriented filesystem which should put lower load on the disk's FTL.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2013-09-12 13:10                                       ` Jan Kara
@ 2013-09-12 13:47                                         ` Marcus Sundman
  2013-09-12 14:39                                           ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2013-09-12 13:47 UTC (permalink / raw)
  To: Jan Kara; +Cc: Theodore Ts'o, Dave Chinner, linux-kernel

On 12.09.2013 16:10, Jan Kara wrote:
> On Thu 12-09-13 15:57:32, Marcus Sundman wrote:
>> On 27.02.2013 01:17, Jan Kara wrote:
>>> On Tue 26-02-13 20:41:36, Marcus Sundman wrote:
>>>> On 24.02.2013 03:20, Theodore Ts'o wrote:
>>>>> On Sun, Feb 24, 2013 at 11:12:22AM +1100, Dave Chinner wrote:
>>>>>>>> /dev/sda6 /home ext4 rw,noatime,discard 0 0
>>>>>>                                     ^^^^^^^
>>>>>> I'd say that's your problem....
>>>>> Looks like the Sandisk U100 is a good SSD for me to put on my personal
>>>>> "avoid" list:
>>>>>
>>>>> http://thessdreview.com/our-reviews/asus-zenbook-ssd-review-not-necessarily-sandforce-driven-shows-significant-speed-bump/
>>>>>
>>>>> There are a number of SSD's which do not implement "trim" efficiently,
>>>>> so these days, the recommended way to use trim is to run the "fstrim"
>>>>> command out of crontab.
>>>> OK. Removing 'discard' made it much better (the 60-600 second
>>>> freezes are now 1-50 second freezes), but it's still at least an
>>>> order of magnitude worse than a normal HD. When writing, that is --
>>>> reading is very fast (when there's no writing going on).
>>>>
>>>> So, after reading up a bit on this trimming I'm thinking maybe my
>>>> filesystem's block sizes don't match up with my SSD's blocks (or
>>>> whatever its write unit is called). Then writing a FS block would
>>>> always write to multiple SSD blocks, causing multiple
>>>> read-erase-write sequences, right? So how can I check this, and how
>>>> can I make the FS blocks match the SSD blocks?
>>>    As Ted wrote, alignment isn't usually a problem with SSDs. And even if it
>>> was, it would be at most a factor 2 slow down and we don't seem to be at
>>> that fine grained level :)
>>>
>>> At this point you might try mounting the fs with nobarrier mount option (I
>>> know you tried that before but without discard the difference could be more
>>> visible), switching IO scheduler to CFQ (for crappy SSDs it actually isn't
>>> a bad choice), and we'll see how much we can squeeze out of your drive...
>> I repartitioned the drive and reinstalled ubuntu and after that it
>> gladly wrote over 100 MB/s to the SSD without any hangs. However,
>> after a couple of months I noticed it had degraded considerably, and
>> it keeps degrading. Now it's slowly becoming completely unusable
>> again, with write speeds of the magnitude 1 MB/s and dropping.
>>
>> As far as I can tell I have not made any relevant changes. Also, the
>> amount of free space hasn't changed considerably, but it seems that
>> the longer it's been since I reformatted the drive the more free
>> space is required for it to perform well.
>>
>> So, maybe the cause is fragmentation? I tried running e4defrag and
>> then fstrim, but it didn't really help (well, maybe a little bit,
>> but after a couple of days it was back in unusable-land). Also,
>> "e4defrag -c" gives a fragmenation score of less than 5, so...
>>
>> Any ideas?
>    So now you run without 'discard' mount option, right? My guess then would
> be that the FTL layer on your SSD is just crappy and as the erase blocks
> get more fragmented as the filesystem is used it cannot keep up. But it's
> easy to put blame on someone else :)
>
> You can check whether this is a problem of Linux or your SSD by writing a
> large file (few GB or more) like 'dd if=/dev/zero of=testfile bs=1M
> count=4096 oflag=direct'. What is the throughput? If it is bad, check output
> of 'filefrag -v testfile'. If the extents are reasonably large (1 MB and
> more), then the problem is in your SSD firmware. Not much we can do about
> it in that case...
>
> If it really is SSD's firmware, maybe you could try f2fs or similar flash
> oriented filesystem which should put lower load on the disk's FTL.

----8<---------------------------
$ grep LABEL /etc/fstab
LABEL=system    /        ext4    errors=remount-ro,nobarrier,noatime 0 1
LABEL=home    /home        ext4    defaults,nobarrier,noatime 0    2
$ df -h|grep home
/dev/sda3       104G   98G  5.1G  96% /home
$ sync && time dd if=/dev/zero of=testfile bs=1M count=2048 oflag=direct 
&& time sync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 404.571 s, 5.3 MB/s

real    6m44.575s
user    0m0.000s
sys    0m1.300s

real    0m0.111s
user    0m0.000s
sys    0m0.004s
$ filefrag -v testfile
Filesystem type is: ef53
File size of testfile is 2147483648 (524288 blocks, blocksize 4096)
  ext logical physical expected length flags
    0       0 21339392             512
  [... http://sundman.iki.fi/extents.txt ...]
  282  523520  1618176  1568000    768 eof
testfile: 282 extents found
$
----8<---------------------------

Many extents are around 400 blocks(?) -- is this good or bad? (This 
partition has a fragmentation score of 0 according to e4defrag.)


Regards,
Marcus



* Re: Debugging system freezes on filesystem writes
  2013-09-12 13:47                                         ` Marcus Sundman
@ 2013-09-12 14:39                                           ` Jan Kara
  2013-09-12 15:08                                             ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Kara @ 2013-09-12 14:39 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, Theodore Ts'o, Dave Chinner, linux-kernel

On Thu 12-09-13 16:47:43, Marcus Sundman wrote:
> On 12.09.2013 16:10, Jan Kara wrote:
> >On Thu 12-09-13 15:57:32, Marcus Sundman wrote:
> >>On 27.02.2013 01:17, Jan Kara wrote:
> >>>On Tue 26-02-13 20:41:36, Marcus Sundman wrote:
> >>>>On 24.02.2013 03:20, Theodore Ts'o wrote:
> >>>>>On Sun, Feb 24, 2013 at 11:12:22AM +1100, Dave Chinner wrote:
> >>>>>>>>/dev/sda6 /home ext4 rw,noatime,discard 0 0
> >>>>>>                                    ^^^^^^^
> >>>>>>I'd say that's your problem....
> >>>>>Looks like the Sandisk U100 is a good SSD for me to put on my personal
> >>>>>"avoid" list:
> >>>>>
> >>>>>http://thessdreview.com/our-reviews/asus-zenbook-ssd-review-not-necessarily-sandforce-driven-shows-significant-speed-bump/
> >>>>>
> >>>>>There are a number of SSD's which do not implement "trim" efficiently,
> >>>>>so these days, the recommended way to use trim is to run the "fstrim"
> >>>>>command out of crontab.
> >>>>OK. Removing 'discard' made it much better (the 60-600 second
> >>>>freezes are now 1-50 second freezes), but it's still at least an
> >>>>order of magnitude worse than a normal HD. When writing, that is --
> >>>>reading is very fast (when there's no writing going on).
> >>>>
> >>>>So, after reading up a bit on this trimming I'm thinking maybe my
> >>>>filesystem's block sizes don't match up with my SSD's blocks (or
> >>>>whatever its write unit is called). Then writing a FS block would
> >>>>always write to multiple SSD blocks, causing multiple
> >>>>read-erase-write sequences, right? So how can I check this, and how
> >>>>can I make the FS blocks match the SSD blocks?
> >>>   As Ted wrote, alignment isn't usually a problem with SSDs. And even if it
> >>>was, it would be at most a factor 2 slow down and we don't seem to be at
> >>>that fine grained level :)
> >>>
> >>>At this point you might try mounting the fs with nobarrier mount option (I
> >>>know you tried that before but without discard the difference could be more
> >>>visible), switching IO scheduler to CFQ (for crappy SSDs it actually isn't
> >>>a bad choice), and we'll see how much we can squeeze out of your drive...
> >>I repartitioned the drive and reinstalled ubuntu and after that it
> >>gladly wrote over 100 MB/s to the SSD without any hangs. However,
> >>after a couple of months I noticed it had degraded considerably, and
> >>it keeps degrading. Now it's slowly becoming completely unusable
> >>again, with write speeds of the magnitude 1 MB/s and dropping.
> >>
> >>As far as I can tell I have not made any relevant changes. Also, the
> >>amount of free space hasn't changed considerably, but it seems that
> >>the longer it's been since I reformatted the drive the more free
> >>space is required for it to perform well.
> >>
> >>So, maybe the cause is fragmentation? I tried running e4defrag and
> >>then fstrim, but it didn't really help (well, maybe a little bit,
> >>but after a couple of days it was back in unusable-land). Also,
> >>"e4defrag -c" gives a fragmenation score of less than 5, so...
> >>
> >>Any ideas?
> >   So now you run without 'discard' mount option, right? My guess then would
> >be that the FTL layer on your SSD is just crappy and as the erase blocks
> >get more fragmented as the filesystem is used it cannot keep up. But it's
> >easy to put blame on someone else :)
> >
> >You can check whether this is a problem of Linux or your SSD by writing a
> >large file (few GB or more) like 'dd if=/dev/zero of=testfile bs=1M
> >count=4096 oflag=direct'. What is the throughput? If it is bad, check output
> >of 'filefrag -v testfile'. If the extents are reasonably large (1 MB and
> >more), then the problem is in your SSD firmware. Not much we can do about
> >it in that case...
> >
> >If it really is SSD's firmware, maybe you could try f2fs or similar flash
> >oriented filesystem which should put lower load on the disk's FTL.
> 
> ----8<---------------------------
> $ grep LABEL /etc/fstab
> LABEL=system    /        ext4    errors=remount-ro,nobarrier,noatime 0 1
> LABEL=home    /home        ext4    defaults,nobarrier,noatime 0    2
> $ df -h|grep home
> /dev/sda3       104G   98G  5.1G  96% /home
> $ sync && time dd if=/dev/zero of=testfile bs=1M count=2048
> oflag=direct && time sync
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 404.571 s, 5.3 MB/s
> 
> real    6m44.575s
> user    0m0.000s
> sys    0m1.300s
> 
> real    0m0.111s
> user    0m0.000s
> sys    0m0.004s
> $ filefrag -v testfile
> Filesystem type is: ef53
> File size of testfile is 2147483648 (524288 blocks, blocksize 4096)
>  ext logical physical expected length flags
>    0       0 21339392             512
>  [... http://sundman.iki.fi/extents.txt ...]
>  282  523520  1618176  1568000    768 eof
> testfile: 282 extents found
> $
> ----8<---------------------------
> 
> Many extents are around 400 blocks(?) -- is this good or bad? (This
> partition has a fragmentation score of 0 according to e4defrag.)
  The free space is somewhat fragmented, but given how full the fs is this
is understandable. The extents are large enough that the drive shouldn't
have trouble processing them at better than 5 MB/s (a standard rotating disk
would achieve much better throughput with this layout, I believe). So my
conclusion is that the FTL on your drive really sucks (or possibly the drive
doesn't have enough "hidden" additional space to ease the load on the FTL
when the disk gets full).
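As a sanity check on that claim, here is the arithmetic (numbers taken from
the filefrag output above; the 10 ms seek time is an assumed ballpark
figure, not a measurement of any particular disk):

```python
# Numbers from the filefrag output above.
blocks, block_size, extents = 524288, 4096, 282

# Average extent size: large enough that per-extent overhead is small.
avg_extent_mb = blocks * block_size / extents / 1e6
print(round(avg_extent_mb, 1))  # ~7.6 MB per extent on average

# Even if a rotating disk paid a full seek (assume ~10 ms) per extent,
# the total seek overhead for the 2 GiB file would be a few seconds --
# nowhere near enough to explain a 404-second write.
print(extents * 0.010)  # ~2.8 s of seek time in total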

And with this full filesystem fstrim isn't going to help you because we can
trim only free blocks and there aren't that many of those. Sorry.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2013-09-12 14:39                                           ` Jan Kara
@ 2013-09-12 15:08                                             ` Marcus Sundman
  2013-09-12 16:35                                               ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2013-09-12 15:08 UTC (permalink / raw)
  To: Jan Kara; +Cc: Theodore Ts'o, Dave Chinner, linux-kernel

On 12.09.2013 17:39, Jan Kara wrote:
> On Thu 12-09-13 16:47:43, Marcus Sundman wrote:
>> On 12.09.2013 16:10, Jan Kara wrote:
>>>    So now you run without 'discard' mount option, right? My guess then would
>>> be that the FTL layer on your SSD is just crappy and as the erase blocks
>>> get more fragmented as the filesystem is used it cannot keep up. But it's
>>> easy to put blame on someone else :)
>>>
>>> You can check whether this is a problem of Linux or your SSD by writing a
>>> large file (few GB or more) like 'dd if=/dev/zero of=testfile bs=1M
>>> count=4096 oflag=direct'. What is the throughput? If it is bad, check output
>>> of 'filefrag -v testfile'. If the extents are reasonably large (1 MB and
>>> more), then the problem is in your SSD firmware. Not much we can do about
>>> it in that case...
>>>
>>> If it really is SSD's firmware, maybe you could try f2fs or similar flash
>>> oriented filesystem which should put lower load on the disk's FTL.
>> ----8<---------------------------
>> $ grep LABEL /etc/fstab
>> LABEL=system    /        ext4    errors=remount-ro,nobarrier,noatime 0 1
>> LABEL=home    /home        ext4    defaults,nobarrier,noatime 0    2
>> $ df -h|grep home
>> /dev/sda3       104G   98G  5.1G  96% /home
>> $ sync && time dd if=/dev/zero of=testfile bs=1M count=2048
>> oflag=direct && time sync
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 404.571 s, 5.3 MB/s
>>
>> real    6m44.575s
>> user    0m0.000s
>> sys    0m1.300s
>>
>> real    0m0.111s
>> user    0m0.000s
>> sys    0m0.004s
>> $ filefrag -v testfile
>> Filesystem type is: ef53
>> File size of testfile is 2147483648 (524288 blocks, blocksize 4096)
>>   ext logical physical expected length flags
>>     0       0 21339392             512
>>   [... http://sundman.iki.fi/extents.txt ...]
>>   282  523520  1618176  1568000    768 eof
>> testfile: 282 extents found
>> $
>> ----8<---------------------------
>>
>> Many extents are around 400 blocks(?) -- is this good or bad? (This
>> partition has a fragmentation score of 0 according to e4defrag.)
>    The free space is somewhat fragmented but given how full the fs is this
> is understandable. The extents are large enough that the drive shouldn't
> have problems processing them better than at 5 MB/s (standard rotating disk
> would achieve much better throughput with this layout I believe). So my
> conclusion is that really FTL on your drive sucks (or possibly the drive
> doesn't have enough "hidden" additional space to ease the load on FTL when
> the disk gets full).
>
> And with this full filesystem fstrim isn't going to help you because we can
> trim only free blocks and there aren't that many of those. Sorry.

OK, but why does it become worse over time?
And can I somehow "reset" whatever it is that is making it worse so that 
it becomes good again? That way I could spend maybe 1 hour once every 
few months to get it back to top speed.
Any other ideas how I could make this (very expensive and fairly new 
ZenBook) laptop usable?
Also, why doesn't this happen with USB memory sticks?

And many thanks for all your help with this issue! And thanks also to 
Sprouse and Ts'o!


Best regards,
Marcus



* Re: Debugging system freezes on filesystem writes
  2013-09-12 15:08                                             ` Marcus Sundman
@ 2013-09-12 16:35                                               ` Jan Kara
  2013-09-12 17:59                                                 ` Marcus Sundman
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Kara @ 2013-09-12 16:35 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, Theodore Ts'o, Dave Chinner, linux-kernel

On Thu 12-09-13 18:08:13, Marcus Sundman wrote:
> On 12.09.2013 17:39, Jan Kara wrote:
> >On Thu 12-09-13 16:47:43, Marcus Sundman wrote:
> >>On 12.09.2013 16:10, Jan Kara wrote:
> >>>   So now you run without 'discard' mount option, right? My guess then would
> >>>be that the FTL layer on your SSD is just crappy and as the erase blocks
> >>>get more fragmented as the filesystem is used it cannot keep up. But it's
> >>>easy to put blame on someone else :)
> >>>
> >>>You can check whether this is a problem of Linux or your SSD by writing a
> >>>large file (few GB or more) like 'dd if=/dev/zero of=testfile bs=1M
> >>>count=4096 oflag=direct'. What is the throughput? If it is bad, check output
> >>>of 'filefrag -v testfile'. If the extents are reasonably large (1 MB and
> >>>more), then the problem is in your SSD firmware. Not much we can do about
> >>>it in that case...
> >>>
> >>>If it really is SSD's firmware, maybe you could try f2fs or similar flash
> >>>oriented filesystem which should put lower load on the disk's FTL.
> >>----8<---------------------------
> >>$ grep LABEL /etc/fstab
> >>LABEL=system    /        ext4    errors=remount-ro,nobarrier,noatime 0 1
> >>LABEL=home    /home        ext4    defaults,nobarrier,noatime 0    2
> >>$ df -h|grep home
> >>/dev/sda3       104G   98G  5.1G  96% /home
> >>$ sync && time dd if=/dev/zero of=testfile bs=1M count=2048
> >>oflag=direct && time sync
> >>2048+0 records in
> >>2048+0 records out
> >>2147483648 bytes (2.1 GB) copied, 404.571 s, 5.3 MB/s
> >>
> >>real    6m44.575s
> >>user    0m0.000s
> >>sys    0m1.300s
> >>
> >>real    0m0.111s
> >>user    0m0.000s
> >>sys    0m0.004s
> >>$ filefrag -v testfile
> >>Filesystem type is: ef53
> >>File size of testfile is 2147483648 (524288 blocks, blocksize 4096)
> >>  ext logical physical expected length flags
> >>    0       0 21339392             512
> >>  [... http://sundman.iki.fi/extents.txt ...]
> >>  282  523520  1618176  1568000    768 eof
> >>testfile: 282 extents found
> >>$
> >>----8<---------------------------
> >>
> >>Many extents are around 400 blocks(?) -- is this good or bad? (This
> >>partition has a fragmentation score of 0 according to e4defrag.)
> >   The free space is somewhat fragmented but given how full the fs is this
> >is understandable. The extents are large enough that the drive shouldn't
> >have problems processing them better than at 5 MB/s (standard rotating disk
> >would achieve much better throughput with this layout I believe). So my
> >conclusion is that really FTL on your drive sucks (or possibly the drive
> >doesn't have enough "hidden" additional space to ease the load on FTL when
> >the disk gets full).
> >
> >And with this full filesystem fstrim isn't going to help you because we can
> >trim only free blocks and there aren't that many of those. Sorry.
> 
> OK, but why does it become worse over time?
  So my theory is the following. Initially we begin with an empty disk, and
the firmware knows the disk is empty because mkfs.ext4 discards the whole
disk before creating the filesystem. Thus the FTL has relatively easy work
when we write a block, because it has plenty of unused erase blocks where
the block can be stored. As time passes and the disk gets written to, erase
blocks get more fragmented. After some time (especially when the disk is
almost full), each erase block has most of its blocks used and only a couple
free. Thus when we write a new block, the FTL has to do a full
read-modify-write cycle of the whole erase block just to store that single
block.

Good SSDs have quite a bit of additional space beyond the declared size
(I've heard up to 50%) to ease the erase-block fragmentation problem (and
also to extend the limited lifetime of the NAND flash). The FTL can also be
more or less smart about avoiding fragmentation of erase blocks.
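To make the mechanism concrete, here is a toy model (deliberately
oversimplified -- real FTLs do garbage collection, over-provisioning and
wear levelling; the block size and numbers are illustrative, not from any
real drive):

```python
# Toy FTL: flash can only be erased in whole erase blocks.
PAGES_PER_ERASE_BLOCK = 64

def pages_programmed_per_write(used_fraction):
    """Flash pages physically programmed for one logical single-page
    overwrite, given how full the victim erase block is."""
    live_pages = int(PAGES_PER_ERASE_BLOCK * used_fraction)
    if live_pages >= PAGES_PER_ERASE_BLOCK - 1:
        # No free page left: read the whole erase block, merge in the
        # new page, erase, and rewrite everything.
        return PAGES_PER_ERASE_BLOCK
    # A free page is available: just program the new page.
    return 1

print(pages_programmed_per_write(0.10))  # fresh drive: 1
print(pages_programmed_per_write(0.99))  # full, fragmented drive: 64
```

In this model a nearly full drive suffers 64x write amplification per
small write, which is the kind of cliff-edge slowdown described above.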

> And can I somehow "reset" whatever it is that is making it worse so
> that it becomes good again? That way I could spend maybe 1 hour once
> every few months to get it back to top speed.
> Any other ideas how I could make this (very expensive and fairly new
> ZenBook) laptop usable?
  Well, I believe that if you used 70% or less of the disk and regularly
(say once every few days) ran the fstrim command, the disk performance
should stay at a usable level.
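For example, a periodic fstrim could be a tiny cron script along these
lines (the path and mount points are assumptions -- adjust them to your
system and make the file executable):

```shell
#!/bin/sh
# Example /etc/cron.weekly/fstrim -- discard unused blocks once a week.
# -v prints how much was trimmed on each filesystem.
fstrim -v /
fstrim -v /home
```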

> Also, why doesn't this happen with USB memory sticks?
  It does happen. Try running a distro from a USB stick; it is pretty slow.
The reason you don't notice problems with USB sticks is that you don't use
them the way you use your / or /home. Usually you just write a big chunk of
data to the USB stick, it stays there for a while, and then you delete it.
This is much easier on the FTL, because all blocks in an erase block tend to
have the same lifetime, and thus in most cases the whole erase block is
either entirely used or entirely free.

> And many thanks for all your help with this issue! And thanks also
> to Sprouse and Ts'o!
  You are welcome.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2013-09-12 16:35                                               ` Jan Kara
@ 2013-09-12 17:59                                                 ` Marcus Sundman
  2013-09-12 20:46                                                   ` Jan Kara
  2013-09-14  2:41                                                   ` Theodore Ts'o
  0 siblings, 2 replies; 34+ messages in thread
From: Marcus Sundman @ 2013-09-12 17:59 UTC (permalink / raw)
  To: Jan Kara; +Cc: Theodore Ts'o, Dave Chinner, linux-kernel

On 12.09.2013 19:35, Jan Kara wrote:
> On Thu 12-09-13 18:08:13, Marcus Sundman wrote:
>> And can I somehow "reset" whatever it is that is making it worse so
>> that it becomes good again? That way I could spend maybe 1 hour once
>> every few months to get it back to top speed.
>> Any other ideas how I could make this (very expensive and fairly new
>> ZenBook) laptop usable?
>    Well, I believe if you used like 70% or less of the disk and regularly
> (like once in a few days) run fstrim command, I belive the disk performance
> should stay at a usable level.

At 128 GB it is extremely small as it is, and I'm really struggling to 
fit everything on it. Most of my stuff is on my NAS (which has almost 10 TB 
of space), but I still need several code repositories, the development 
environment, a virtual machine, etc. on this tiny 128 GB thing.

So, if I used some other filesystem, might that allow me to use a larger 
portion of the SSD without this degradation? Or with a much slower rate 
of degradation?

And at some point it will become unusable again, so what can I do then? 
If I move everything to my NAS (and maybe even re-create the 
filesystem?) and move everything back, might that get rid of the FTL 
fragmentation? Or could I somehow defragment the FTL without moving away 
everything?


Regards,
Marcus



* Re: Debugging system freezes on filesystem writes
  2013-09-12 17:59                                                 ` Marcus Sundman
@ 2013-09-12 20:46                                                   ` Jan Kara
  2013-09-13  6:35                                                     ` Marcus Sundman
  2013-09-14  2:41                                                   ` Theodore Ts'o
  1 sibling, 1 reply; 34+ messages in thread
From: Jan Kara @ 2013-09-12 20:46 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, Theodore Ts'o, Dave Chinner, linux-kernel

On Thu 12-09-13 20:59:07, Marcus Sundman wrote:
> On 12.09.2013 19:35, Jan Kara wrote:
> >On Thu 12-09-13 18:08:13, Marcus Sundman wrote:
> >>And can I somehow "reset" whatever it is that is making it worse so
> >>that it becomes good again? That way I could spend maybe 1 hour once
> >>every few months to get it back to top speed.
> >>Any other ideas how I could make this (very expensive and fairly new
> >>ZenBook) laptop usable?
> >   Well, I believe if you used like 70% or less of the disk and regularly
> >(like once in a few days) run fstrim command, I belive the disk performance
> >should stay at a usable level.
> 
> At 128 GB it is extremely small as it is, and I'm really struggling
> to fit all on it. Most of my stuff is on my NAS (which has almost 10
> TB space), but still I need several code repositories and the
> development environment and a virtual machine etc on this tiny 128
> GB thing.
  I see. I have a 70 GB disk and 50% of it is free :) But I have test
machines with much larger drives where I keep VMs etc. This one is just
for email and coding.

> So, if I used some other filesystem, might that allow me to use a
> larger portion of the SSD without this degradation? Or with a much
> slower rate of degradation?
  You might try f2fs. It is designed for low-end flash storage, so it might
work better than ext4. But it is a new filesystem, so back up often.

> And at some point it will become unusable again, so what can I do
> then? If I move everything to my NAS (and maybe even re-create the
> filesystem?) and move everything back, might that get rid of the FTL
> fragmentation?
  Yes, that should get rid of it. But since you have only a few GB free,
I'm afraid the fragmentation will reappear pretty quickly. But I guess it's
worth a try.

> Or could I somehow defragment the FTL without moving away everything?
  I don't know of such a way.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2013-09-12 20:46                                                   ` Jan Kara
@ 2013-09-13  6:35                                                     ` Marcus Sundman
  2013-09-13 20:54                                                       ` Jan Kara
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2013-09-13  6:35 UTC (permalink / raw)
  To: Jan Kara; +Cc: Theodore Ts'o, Dave Chinner, linux-kernel

On 12.09.2013 23:46, Jan Kara wrote:
> On Thu 12-09-13 20:59:07, Marcus Sundman wrote:
>> On 12.09.2013 19:35, Jan Kara wrote:
>>> On Thu 12-09-13 18:08:13, Marcus Sundman wrote:
>>>> And can I somehow "reset" whatever it is that is making it worse so
>>>> that it becomes good again? That way I could spend maybe 1 hour once
>>>> every few months to get it back to top speed.
>>>> Any other ideas how I could make this (very expensive and fairly new
>>>> ZenBook) laptop usable?
>>>    Well, I believe if you used like 70% or less of the disk and regularly
>>> (like once in a few days) run fstrim command, I belive the disk performance
>>> should stay at a usable level.
>> At 128 GB it is extremely small as it is, and I'm really struggling
>> to fit all on it. Most of my stuff is on my NAS (which has almost 10
>> TB space), but still I need several code repositories and the
>> development environment and a virtual machine etc on this tiny 128
>> GB thing.
>    I see. I have like 70 GB disk and 50% of it are free :) But I have test
> machines with much larger drives where I have VMs etc. This one is just
> for email and coding.
>
>> So, if I used some other filesystem, might that allow me to use a
>> larger portion of the SSD without this degradation? Or with a much
>> slower rate of degradation?
>    You might try f2fs. That is designed for low end flash storage so it
> might work better than ext4. But it is a new filesystem so backup often.
>
>> And at some point it will become unusable again, so what can I do
>> then? If I move everything to my NAS (and maybe even re-create the
>> filesystem?) and move everything back, might that get rid of the FTL
>> fragmentation?
>    Yes, that should get rid of it. But since you have only a few GB free,
> I'm afraid the fragmentation will reappear pretty quickly. But I guess it's
> worth a try.
>
>> Or could I somehow defragment the FTL without moving away everything?
>    I don't know about such way.

How about triggering the garbage collection on the drive, is that possible?


- Marcus



* Re: Debugging system freezes on filesystem writes
  2013-09-13  6:35                                                     ` Marcus Sundman
@ 2013-09-13 20:54                                                       ` Jan Kara
  0 siblings, 0 replies; 34+ messages in thread
From: Jan Kara @ 2013-09-13 20:54 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, Theodore Ts'o, Dave Chinner, linux-kernel

On Fri 13-09-13 09:35:05, Marcus Sundman wrote:
> On 12.09.2013 23:46, Jan Kara wrote:
> >On Thu 12-09-13 20:59:07, Marcus Sundman wrote:
> >>On 12.09.2013 19:35, Jan Kara wrote:
> >>>On Thu 12-09-13 18:08:13, Marcus Sundman wrote:
> >>>>And can I somehow "reset" whatever it is that is making it worse so
> >>>>that it becomes good again? That way I could spend maybe 1 hour once
> >>>>every few months to get it back to top speed.
> >>>>Any other ideas how I could make this (very expensive and fairly new
> >>>>ZenBook) laptop usable?
> >>>   Well, I believe if you used like 70% or less of the disk and regularly
> >>>(like once in a few days) run fstrim command, I belive the disk performance
> >>>should stay at a usable level.
> >>At 128 GB it is extremely small as it is, and I'm really struggling
> >>to fit everything on it. Most of my stuff is on my NAS (which has almost 10
> >>TB space), but still I need several code repositories and the
> >>development environment and a virtual machine etc on this tiny 128
> >>GB thing.
> >   I see. I have like a 70 GB disk and 50% of it is free :) But I have test
> >machines with much larger drives where I have VMs etc. This one is just
> >for email and coding.
> >
> >>So, if I used some other filesystem, might that allow me to use a
> >>larger portion of the SSD without this degradation? Or with a much
> >>slower rate of degradation?
> >   You might try f2fs. That is designed for low-end flash storage so it
> >might work better than ext4. But it is a new filesystem, so back up often.
> >
> >>And at some point it will become unusable again, so what can I do
> >>then? If I move everything to my NAS (and maybe even re-create the
> >>filesystem?) and move everything back, might that get rid of the FTL
> >>fragmentation?
> >   Yes, that should get rid of it. But since you have only a few GB free,
> >I'm afraid the fragmentation will reappear pretty quickly. But I guess it's
> >worth a try.
> >
> >>Or could I somehow defragment the FTL without moving away everything?
> >   I don't know of such a way.
> 
> How about triggering the garbage collection on the drive? Is that possible?
  No, I don't know of any way to do that.
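
  For completeness, the regular fstrim run suggested earlier in the thread
can at least be automated so it isn't forgotten. A sketch of a root cron
entry (a config fragment only; the fstrim path varies by distribution, and
util-linux on systemd-based distros also ships an fstrim.timer unit that
does the same job):

```
# /etc/cron.d/fstrim (sketch): trim / every third night at 03:00
0 3 */3 * *   root   /sbin/fstrim /
```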

								Honza

-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: Debugging system freezes on filesystem writes
  2013-09-12 17:59                                                 ` Marcus Sundman
  2013-09-12 20:46                                                   ` Jan Kara
@ 2013-09-14  2:41                                                   ` Theodore Ts'o
  2013-09-15 19:19                                                     ` Marcus Sundman
  1 sibling, 1 reply; 34+ messages in thread
From: Theodore Ts'o @ 2013-09-14  2:41 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, Dave Chinner, linux-kernel

On Thu, Sep 12, 2013 at 08:59:07PM +0300, Marcus Sundman wrote:
> 
> At 128 GB it is extremely small as it is, and I'm really struggling
> to fit everything on it. Most of my stuff is on my NAS (which has almost 10
> TB space), but still I need several code repositories and the
> development environment and a virtual machine etc on this tiny 128
> GB thing.
> 
> So, if I used some other filesystem, might that allow me to use a
> larger portion of the SSD without this degradation? Or with a much
> slower rate of degradation?

What model are you using?  It's possible that your flash device was
designed as a cache drive for Windows.  As such, it might have been
optimized for a read-mostly workload rather than for a lot of
small random writes.

The f2fs file system is designed for crappy flash drives with crappy
FTLs, so it might work better for you.  But let me ask you this
--- how much is your data worth?  How much would it cost to replace
your flash device with something better?

I tend to get very nervous with crappy storage devices, and it sounds
like your flash drive isn't a particularly good one.  I'd strongly
suggest doing regular backups, because when flash devices die, they
can die in extremely catastrophic ways.

Regards,

						- Ted


* Re: Debugging system freezes on filesystem writes
  2013-09-14  2:41                                                   ` Theodore Ts'o
@ 2013-09-15 19:19                                                     ` Marcus Sundman
  2013-09-16  0:06                                                       ` Theodore Ts'o
  0 siblings, 1 reply; 34+ messages in thread
From: Marcus Sundman @ 2013-09-15 19:19 UTC (permalink / raw)
  To: Theodore Ts'o, Jan Kara, Dave Chinner, linux-kernel

On 14.09.2013 05:41, Theodore Ts'o wrote:
> On Thu, Sep 12, 2013 at 08:59:07PM +0300, Marcus Sundman wrote:
>> At 128 GB it is extremely small as it is, and I'm really struggling
>> to fit everything on it. Most of my stuff is on my NAS (which has almost 10
>> TB space), but still I need several code repositories and the
>> development environment and a virtual machine etc on this tiny 128
>> GB thing.
>>
>> So, if I used some other filesystem, might that allow me to use a
>> larger portion of the SSD without this degradation? Or with a much
>> slower rate of degradation?
> What model are you using?  It's possible that your flash device was
> designed as a cache driver for windows.  As such, it might have been
> optimized for a read-mostly workload and not something for a lot of
> random small writes.

It's a SanDisk SSD U100.
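
(Aside, in case it's useful: whether a drive advertises discard/TRIM at all
can be checked from sysfs. A sketch, not from this thread; the sysfs root is
a parameter only so the function can be exercised without real hardware:)

```shell
# Sketch: does the kernel report discard (TRIM) support for a disk?
# On a live system, call it as: supports_discard sda
supports_discard() {
    local dev="$1" sysfs="${2:-/sys/block}"
    local f="$sysfs/$dev/queue/discard_granularity"
    # A readable, nonzero discard_granularity means discard requests work.
    [ -r "$f" ] && [ "$(cat "$f")" -gt 0 ] 2>/dev/null
}
```

If this reports no support, fstrim on filesystems from that drive will
simply fail, and any garbage collection can only happen inside the FTL.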

> The f2fs file system is designed for crappy flash drives with crappy
> FTLs, so it might work better for you.

OK, I'll probably try it when I have time to switch.

> But let me ask you this
> --- how much is your data worth?  How much would it cost to replace
> your flash device with something better?

A lot. I have lsyncd running here most of the time, backing up to my 
RAID-Z NAS, which in turn uses a versioned off-site backup system.
Anyway, this is an Asus ZenBook and can't be opened. Well, it 
can, but that will void the warranty at the very least.


- Marcus



* Re: Debugging system freezes on filesystem writes
  2013-09-15 19:19                                                     ` Marcus Sundman
@ 2013-09-16  0:06                                                       ` Theodore Ts'o
  0 siblings, 0 replies; 34+ messages in thread
From: Theodore Ts'o @ 2013-09-16  0:06 UTC (permalink / raw)
  To: Marcus Sundman; +Cc: Jan Kara, Dave Chinner, linux-kernel

On Sun, Sep 15, 2013 at 10:19:41PM +0300, Marcus Sundman wrote:
> 
> It's a SanDisk SSD U100.

Here's one report of someone's experience with the U100 in an
Asus UX31E:

https://communities.intel.com/thread/32515

> A lot. I have lsyncd running here most of the time, backing up to my
> raid-z NAS which in turn uses a versioned off-site backup system.
> Anyway, this is an Asus ZenBook computer and can't be opened. Well,
> it can, but that will void the warranty at the very least.

Personally, I avoid like the plague laptops where I can't
replace the storage devices (or upgrade memory, etc.) without voiding
the warranty....

I generally order laptops without the flash device, and carefully
select for high-quality flash devices.  Otherwise, you end up paying
too much for what is generally shoddy equipment....

					- Ted



Thread overview: 34+ messages
2012-10-28 22:39 Debugging system freezes on filesystem writes Marcus Sundman
2012-11-01 19:01 ` Jan Kara
2012-11-02  2:19   ` Marcus Sundman
2012-11-07 16:17     ` Jan Kara
2012-11-08 23:41       ` Marcus Sundman
2012-11-09 13:12         ` Marcus Sundman
2012-11-13 13:51           ` Jan Kara
2012-11-16  1:11             ` Marcus Sundman
2012-11-21 23:30               ` Jan Kara
2012-11-27 16:14                 ` Marcus Sundman
2012-12-05 15:32                   ` Jan Kara
2013-02-20  8:42                     ` Marcus Sundman
2013-02-20 11:40                       ` Marcus Sundman
2013-02-22 20:51                         ` Jan Kara
2013-02-22 23:27                           ` Marcus Sundman
2013-02-24  0:12                             ` Dave Chinner
2013-02-24  1:20                               ` Theodore Ts'o
2013-02-26 18:41                                 ` Marcus Sundman
2013-02-26 22:17                                   ` Theodore Ts'o
2013-02-26 23:17                                   ` Jan Kara
2013-09-12 12:57                                     ` Marcus Sundman
2013-09-12 13:10                                       ` Jan Kara
2013-09-12 13:47                                         ` Marcus Sundman
2013-09-12 14:39                                           ` Jan Kara
2013-09-12 15:08                                             ` Marcus Sundman
2013-09-12 16:35                                               ` Jan Kara
2013-09-12 17:59                                                 ` Marcus Sundman
2013-09-12 20:46                                                   ` Jan Kara
2013-09-13  6:35                                                     ` Marcus Sundman
2013-09-13 20:54                                                       ` Jan Kara
2013-09-14  2:41                                                   ` Theodore Ts'o
2013-09-15 19:19                                                     ` Marcus Sundman
2013-09-16  0:06                                                       ` Theodore Ts'o
2013-02-25 13:05                             ` Jan Kara
