* Contributing to NILFS
@ 2012-12-10 20:05 Andreas Rohner
  2012-12-11  6:46 ` Vyacheslav Dubeyko
  0 siblings, 1 reply; 11+ messages in thread
From: Andreas Rohner @ 2012-12-10 20:05 UTC (permalink / raw)
  To: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi,

I am a computer science student from Austria and I am looking for a
topic for my master's thesis. I am very interested in log-structured file
systems and I thought of doing a few things from the TODO list on the
website: http://www.nilfs.org/en/current_status.html
I am particularly interested in the "Online defrag" feature, but I
haven't looked into the source code yet. I have a few questions
concerning that and any help would be greatly appreciated:

1. Has someone already started working on it?
2. Is there some fundamental difficulty that makes it hard to implement
for a log-structured fs?
3. How much work would it entail? Is it doable for one well-versed C
programmer in 2 to 3 months?

best regards,
Andreas Rohner


--
To unsubscribe from this list: send the line "unsubscribe linux-nilfs" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Contributing to NILFS
  2012-12-10 20:05 Contributing to NILFS Andreas Rohner
@ 2012-12-11  6:46 ` Vyacheslav Dubeyko
  2012-12-11 13:54   ` Andreas Rohner
  0 siblings, 1 reply; 11+ messages in thread
From: Vyacheslav Dubeyko @ 2012-12-11  6:46 UTC (permalink / raw)
  To: Andreas Rohner; +Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi Andreas,

On Mon, 2012-12-10 at 21:05 +0100, Andreas Rohner wrote:
> Hi,
> 
> I am a computer science student from Austria and I am looking for a
> topic for my masters thesis. I am very interested in log-structured file
> systems and I thought of doing a few things from the TODO list on the
> website: http://www.nilfs.org/en/current_status.html
> I am particularly interested in the "Online defrag" feature, but I
> haven't looked into the source code yet. I have a few questions
> concerning that and any help would be greatly appreciated:
> 
> 1. Has someone already started working on it?

As far as I know, you would be the first. :-)

> 2. Is there some fundamental difficulty that makes it hard to implement
> for a log-structured fs?

I think the most fundamental issue is the potential for performance
degradation. But first of all, from my point of view, we need to
discuss what online defrag means and how it could be implemented. What
do you personally mean by online defrag? And how do you imagine an
online defrag mechanism for NILFS2 in particular? Once you describe
your understanding, we can discuss the difficulties, I think. :-)

> 3. How much work would it entail? Is it doable for one well-versed C
> programmer in 2 to 3 months?
> 

I think it is easier to predict the duration of an implementation task
when you know something about the developer. But, as I understand it,
you are not familiar with the NILFS2 source code. And how deep is your
experience with Linux kernel development? So it is not easy to
forecast anything. :-) I suggest simply beginning the implementation.
In any case, it will be very useful for your master's thesis, I think.

With the best regards,
Vyacheslav Dubeyko.





* Re: Contributing to NILFS
  2012-12-11  6:46 ` Vyacheslav Dubeyko
@ 2012-12-11 13:54   ` Andreas Rohner
  2012-12-12  7:08     ` Vyacheslav Dubeyko
  0 siblings, 1 reply; 11+ messages in thread
From: Andreas Rohner @ 2012-12-11 13:54 UTC (permalink / raw)
  To: Vyacheslav Dubeyko; +Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi Vyacheslav,

Thanks for your response.

> > 2. Is there some fundamental difficulty that makes it hard to implement
> > for a log-structured fs?
> 
> I think the most fundamental issue is the potential for performance
> degradation. But first of all, from my point of view, we need to
> discuss what online defrag means and how it could be implemented. What
> do you personally mean by online defrag? And how do you imagine an
> online defrag mechanism for NILFS2 in particular? Once you describe
> your understanding, we can discuss the difficulties, I think. :-)

One way would be to simply write out heavily fragmented files
sequentially and atomically switch to the new blocks. But, as you
suggested, this simple approach would probably result in performance
degradation, because it would eat up free segments, and the segments
containing the old blocks would be left with more unusable free space
that has to be cleaned first. This could result in an undesirable
situation where most of the segments are 60% full and, for every clean
segment it produces, the cleaner has to read in 4 half-full segments.
I think the difficult part is finding a suitable heuristic to decide
whether it is beneficial to defragment a file or not. My aim would be
to produce as many clean or nearly clean segments as possible in the
process. I would try to implement and test different heuristics and
algorithms on file systems aged in different ways and compare the
results.

> > 3. How much work would it entail? Is it doable for one well-versed C
> > programmer in 2 to 3 months?
> > 
> 
> I think it is easier to predict the duration of an implementation task
> when you know something about the developer. But, as I understand it,
> you are not familiar with the NILFS2 source code. And how deep is your
> experience with Linux kernel development? So it is not easy to
> forecast anything. :-) I suggest simply beginning the implementation.
> In any case, it will be very useful for your master's thesis, I think.

I am not familiar with the NILFS2 source code and I am not a kernel
developer, but I am very confident in my abilities as a C programmer. I
am more concerned that there is some huge obstacle to the
implementation that I don't know about.

best regards,
Andreas Rohner



* Re: Contributing to NILFS
  2012-12-11 13:54   ` Andreas Rohner
@ 2012-12-12  7:08     ` Vyacheslav Dubeyko
  2012-12-12 15:30       ` Sven-Göran Bergh
  2012-12-16 17:45       ` Andreas Rohner
  0 siblings, 2 replies; 11+ messages in thread
From: Vyacheslav Dubeyko @ 2012-12-12  7:08 UTC (permalink / raw)
  To: Andreas Rohner; +Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi Andreas,

On Tue, 2012-12-11 at 14:54 +0100, Andreas Rohner wrote:
> Hi Vyacheslav,
> 
> Thanks for your response.
> 
> > > 2. Is there some fundamental difficulty that makes it hard to implement
> > > for a log-structured fs?
> > 
> > I think the most fundamental issue is the potential for performance
> > degradation. But first of all, from my point of view, we need to
> > discuss what online defrag means and how it could be implemented.
> > What do you personally mean by online defrag? And how do you imagine
> > an online defrag mechanism for NILFS2 in particular? Once you
> > describe your understanding, we can discuss the difficulties, I
> > think. :-)
> 
> One way would be to simply write out heavily fragmented files
> sequentially and atomically switch to the new blocks. But, as you
> suggested, this simple approach would probably result in performance
> degradation, because it would eat up free segments, and the segments
> containing the old blocks would be left with more unusable free space
> that has to be cleaned first. This could result in an undesirable
> situation where most of the segments are 60% full and, for every
> clean segment it produces, the cleaner has to read in 4 half-full
> segments. I think the difficult part is finding a suitable heuristic
> to decide whether it is beneficial to defragment a file or not. My
> aim would be to produce as many clean or nearly clean segments as
> possible in the process. I would try to implement and test different
> heuristics and algorithms on file systems aged in different ways and
> compare the results.
> 

I think this task hides many difficult questions. How do we determine
whether a file is fragmented or not? How do we measure the degree of
fragmentation? What degree of fragmentation should trigger
defragmentation activity? When do we need to detect fragmentation, and
how do we keep this knowledge? And how do we defragment without
degrading performance?

As I understand it, when we talk about defragmentation we expect a
performance enhancement as a result. But defragmenter activity can
itself become a background source of performance degradation. And not
every workload or I/O pattern causes significant fragmentation.

Also, it is very important to choose the point of defragmentation. I
mean that it is possible to try to prevent fragmentation, or to
correct fragmentation after flushing to the volume. Some hybrid
technique is also possible, I think. An I/O pattern or file type can
be a basis for such a decision, I think.

As I understand it, F2FS [1] has some defragmentation approaches. I
think the technique for detecting fragmented files and the degree of
fragmentation needs deeper discussion. Maybe the hot data tracking
patch [2,3] can be a basis for such a discussion.

Some materials about NILFS2 may also be useful here. I began a design
document for NILFS2 [4], but unfortunately it is not finished yet. A
review of NILFS2 [5] was published fairly recently.

There are some defragmentation-related papers, but I don't have a
comprehensive list. I can mention "The Effects of Filesystem
Fragmentation" [6]. The papers "A Five-Year Study of File-System
Metadata" [7] and "A File Is Not a File: Understanding the I/O
Behavior of Apple Desktop Applications" [8] may also be useful.

So, I feel the need to think more deeply about the online
defragmentation task and about what you said. But, anyway, this is the
beginning of a discussion. :-)

[1] http://lwn.net/Articles/518988/
[2] http://lwn.net/Articles/525425/
[3] http://lwn.net/Articles/400029/
[4] http://dubeyko.com/development/FileSystems/NILFS/nilfs2-design.pdf
[5] http://lwn.net/Articles/522507/
[6] http://www.kernel.org/doc/ols/2006/ols2006v1-pages-193-208.pdf
[7] http://research.microsoft.com/pubs/72896/fast07-final.pdf
[8] http://research.cs.wisc.edu/wind/Publications/ibench-1c-sosp11.pdf

With the best regards,
Vyacheslav Dubeyko.




* RE: Contributing to NILFS
  2012-12-12  7:08     ` Vyacheslav Dubeyko
@ 2012-12-12 15:30       ` Sven-Göran Bergh
       [not found]         ` <1355326242.67765.YahooMailNeo-mKBY30tKGRG2Y7dhQGSVAJOW+3bF1jUfVpNB7YpNyf8@public.gmane.org>
  2012-12-16 17:45       ` Andreas Rohner
  1 sibling, 1 reply; 11+ messages in thread
From: Sven-Göran Bergh @ 2012-12-12 15:30 UTC (permalink / raw)
  To: Vyacheslav Dubeyko, Andreas Rohner; +Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi,

2012-12-12 08:08, Vyacheslav Dubeyko <slava-yeENwD64cLxBDgjK7y7TUQ@public.gmane.org>:

[snip]
> I think this task hides many difficult questions. How do we determine
> whether a file is fragmented or not? How do we measure the degree of
> fragmentation? What degree of fragmentation should trigger
> defragmentation activity? When do we need to detect fragmentation,
> and how do we keep this knowledge? And how do we defragment without
> degrading performance?

These questions are of special interest if we throw the type of media
into the discussion. Fragmentation is not a big deal on NAND-based
media (SSDs, memory cards, USB sticks, etc.). Defragmentation activity
might even shorten the lifetime of such media due to the limited
number of write/erase cycles.

Brgds
/S-G



* Re: Contributing to NILFS
       [not found]         ` <1355326242.67765.YahooMailNeo-mKBY30tKGRG2Y7dhQGSVAJOW+3bF1jUfVpNB7YpNyf8@public.gmane.org>
@ 2012-12-12 19:57           ` Vyacheslav Dubeyko
       [not found]             ` <706EE260-E8A2-410A-9211-FB4859516478-yeENwD64cLxBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Vyacheslav Dubeyko @ 2012-12-12 19:57 UTC (permalink / raw)
  To: Sven-Göran Bergh; +Cc: Andreas Rohner, linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi,

On Dec 12, 2012, at 6:30 PM, Sven-Göran Bergh wrote:

[snip]
>> 
>> I think this task hides many difficult questions. How do we
>> determine whether a file is fragmented or not? How do we measure the
>> degree of fragmentation? What degree of fragmentation should trigger
>> defragmentation activity? When do we need to detect fragmentation,
>> and how do we keep this knowledge? And how do we defragment without
>> degrading performance?
> 
> These questions are of special interest if we throw the type of media
> into the discussion. Fragmentation is not a big deal on NAND-based
> media (SSDs, memory cards, USB sticks, etc.). Defragmentation
> activity might even shorten the lifetime of such media due to the
> limited number of write/erase cycles.
> 

That is a good remark. Thank you.

Yes, of course, we need to keep NAND wear in mind, especially in the case of online defragmentation. But, as you know, NILFS2 performs garbage collection. As I understand it, the GC can copy some blocks from the segments being cleaned into new ones. So such copying also shortens NAND lifetime. Do you suggest not performing garbage collection because of that?

We have at minimum two points for online defragmentation: (1) before flushing; (2) during garbage collection. Thereby, if you defragment before any write, you don't shorten NAND lifetime. We need to perform garbage collection anyway; as a result, it is possible to use this activity for defragmentation as well.

Yes, of course, NAND flash has good performance in the case of random reads. But the case of contiguous file blocks is still better than the fragmented case. First of all, when reading a fragmented file you need to look up a block address before reading every non-adjacent block's data, so you will spend more cycles reading a fragmented file than one with contiguous blocks. Secondly, because of read disturbance, random reads can force the FTL to copy more erase blocks to new ones and, as a result, shorten NAND lifetime further. Thirdly, a fragmented volume state leads to more complex and unpredictable workloads with more intensive metadata operations, which can degrade filesystem performance. And, finally, the GC has much harder work in the case of a fragmented volume, especially in the presence of deleted files.

Thereby, I think it makes sense to implement online defragmentation for NILFS2. But, of course, it is a difficult and complex task because of the risk of degrading performance and shortening NAND lifetime.

With the best regards,
Vyacheslav Dubeyko.




* RE: Contributing to NILFS
       [not found]             ` <706EE260-E8A2-410A-9211-FB4859516478-yeENwD64cLxBDgjK7y7TUQ@public.gmane.org>
@ 2012-12-13 10:59               ` Sven-Göran Bergh
  0 siblings, 0 replies; 11+ messages in thread
From: Sven-Göran Bergh @ 2012-12-13 10:59 UTC (permalink / raw)
  To: Vyacheslav Dubeyko; +Cc: Andreas Rohner, linux-nilfs-u79uwXL29TY76Z2rM5mHXA



2012-12-12 20:57, Vyacheslav Dubeyko <slava-yeENwD64cLxBDgjK7y7TUQ@public.gmane.org>:

> Hi,
> 
> On Dec 12, 2012, at 6:30 PM, Sven-Göran Bergh wrote:
> 
> [snip]
>>> 
>>> I think this task hides many difficult questions. How do we
>>> determine whether a file is fragmented or not? How do we measure
>>> the degree of fragmentation? What degree of fragmentation should
>>> trigger defragmentation activity? When do we need to detect
>>> fragmentation, and how do we keep this knowledge? And how do we
>>> defragment without degrading performance?
>> 
>> These questions are of special interest if we throw the type of
>> media into the discussion. Fragmentation is not a big deal on
>> NAND-based media (SSDs, memory cards, USB sticks, etc.).
>> Defragmentation activity might even shorten the lifetime of such
>> media due to the limited number of write/erase cycles.
>> 
> 
> It is a good remark. Thank you.
> 
> Yes, of course, we need to keep NAND wear in mind, especially in the
> case of online defragmentation. But, as you know, NILFS2 performs
> garbage collection. As I understand it, the GC can copy some blocks
> from the segments being cleaned into new ones. So such copying also
> shortens NAND lifetime. Do you suggest not performing garbage
> collection because of that?

No, obviously not. :-)

> We have at minimum two points for online defragmentation: (1) before
> flushing; (2) during garbage collection. Thereby, if you defragment
> before any write, you don't shorten NAND lifetime. We need to perform
> garbage collection anyway; as a result, it is possible to use this
> activity for defragmentation as well.
> 
> Yes, of course, NAND flash has good performance in the case of random
> reads. But the case of contiguous file blocks is still better than
> the fragmented case. First of all, when reading a fragmented file you
> need to look up a block address before reading every non-adjacent
> block's data, so you will spend more cycles reading a fragmented file
> than one with contiguous blocks. Secondly, because of read
> disturbance, random reads can force the FTL to copy more erase blocks
> to new ones and, as a result, shorten NAND lifetime further. Thirdly,
> a fragmented volume state leads to more complex and unpredictable
> workloads with more intensive metadata operations, which can degrade
> filesystem performance. And, finally, the GC has much harder work in
> the case of a fragmented volume, especially in the presence of
> deleted files.

OK, it seems you misunderstood my previous statement. I do not argue
against defragmentation. However, there are many use cases, and I just
felt that NAND wear is of great importance as SSDs are marching in.
Thus, it should be part of the discussion, so that defrag is
implemented in a NAND-friendly way, trying to minimize NAND wear as
well. As you pointed out above, there are many parameters in the
equation, and NAND is yet another one that needs to be considered.

> Thereby, I think it makes sense to implement online defragmentation
> for NILFS2. But, of course, it is a difficult and complex task
> because of the risk of degrading performance and shortening NAND
> lifetime.

Spot on! Totally agree.

Brgds
/S-G



* Re: Contributing to NILFS
  2012-12-12  7:08     ` Vyacheslav Dubeyko
  2012-12-12 15:30       ` Sven-Göran Bergh
@ 2012-12-16 17:45       ` Andreas Rohner
  2012-12-17  6:30         ` Vyacheslav Dubeyko
  1 sibling, 1 reply; 11+ messages in thread
From: Andreas Rohner @ 2012-12-16 17:45 UTC (permalink / raw)
  To: Vyacheslav Dubeyko; +Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi Vyacheslav,

> I think this task hides many difficult questions. How do we determine
> whether a file is fragmented or not? How do we measure the degree of
> fragmentation? What degree of fragmentation should trigger
> defragmentation activity? When do we need to detect fragmentation,
> and how do we keep this knowledge? And how do we defragment without
> degrading performance?
>
> As I understand it, when we talk about defragmentation we expect a
> performance enhancement as a result. But defragmenter activity can
> itself become a background source of performance degradation. And not
> every workload or I/O pattern causes significant fragmentation.
> 
> Also, it is very important to choose the point of defragmentation. I
> mean that it is possible to try to prevent fragmentation, or to
> correct fragmentation after flushing to the volume. Some hybrid
> technique is also possible, I think. An I/O pattern or file type can
> be a basis for such a decision, I think.

Yes, I agree. It is of course a good idea to reorder the data before
flushing, and probably also to reorder it in the cleaner, but I
thought that was already implemented and optimized. Is it?

Instead I imagined a tool like xfs_fsr for XFS, so the user can decide
when to defragment the file system by running it manually or via a
cron job. Maybe this is a bit naive, since I probably don't know
enough about NILFS. Couldn't we just calculate the number of segments
a file would use if it were stored optimally and compare that to the
actual number of segments the file is spread across? For example, file
A is 16 MB. Let's assume segments are 8 MB in size. So (ignoring the
metadata) file A should use 2 segments. Now we count the distinct
segments where the blocks of file A really are, let's say 10, and
calculate 1 - (2/10) = 0.8, so it is 80% fragmented.
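In code, that back-of-the-envelope metric would be something like the
following sketch; frag_degree() is a hypothetical helper name for
illustration, not existing NILFS2 code:

```c
#include <assert.h>
#include <math.h>

/* Fragmentation degree as described above: 1 - (ideal / actual
 * segments), where "ideal" is how many segments the file would occupy
 * if laid out contiguously.  Metadata blocks are ignored, as in the
 * example. */
static double frag_degree(unsigned long file_bytes,
			  unsigned long seg_bytes,
			  unsigned long actual_segs)
{
	/* segments needed if the file were stored optimally */
	unsigned long ideal = (file_bytes + seg_bytes - 1) / seg_bytes;

	if (actual_segs <= ideal)
		return 0.0;	/* already stored optimally */
	return 1.0 - (double)ideal / (double)actual_segs;
}
```

For the 16 MB file over 8 MB segments spread across 10 segments, this
yields the 0.8 (80%) figure from the example.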

I wouldn't do that in the cleaner or in the background. Just a tool
like xfs_fsr that the user can run once a month, in the middle of the
night, via a cron job. The tool would go through every file, calculate
its fragmentation, collect other statistics, and decide whether it is
worth defragmenting or not.
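As a rough sketch of how such a user-space tool might measure a file,
here is an example using the generic FIEMAP ioctl (the interface
xfs_fsr-style tools rely on), assuming the filesystem implements
FIEMAP; the helper names and the defrag threshold are illustrative
assumptions, not existing NILFS2 or xfs_fsr code:

```c
#include <assert.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

/* Ask the kernel how many extents back an open file.  With
 * fm_extent_count set to 0, FIEMAP returns only the extent count in
 * fm_mapped_extents, without copying extent records. */
static int count_extents(int fd)
{
	struct fiemap fm;

	memset(&fm, 0, sizeof(fm));
	fm.fm_start = 0;
	fm.fm_length = FIEMAP_MAX_OFFSET;
	fm.fm_flags = FIEMAP_FLAG_SYNC;	/* flush dirty data first */
	fm.fm_extent_count = 0;

	if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0)
		return -1;
	return (int)fm.fm_mapped_extents;
}

/* Illustrative policy: defragment only when the file occupies more
 * than twice the extents it would need if laid out contiguously. */
static int should_defrag(int extents, int ideal_extents)
{
	return extents > 2 * ideal_extents;
}
```

The cron-driven tool would then walk the tree, call count_extents() on
each file, and feed the result into whatever heuristic it settles on.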

If the user has an SSD, he/she can decide not to defragment at all.

> As I understand it, F2FS [1] has some defragmentation approaches. I
> think the technique for detecting fragmented files and the degree of
> fragmentation needs deeper discussion. Maybe the hot data tracking
> patch [2,3] can be a basis for such a discussion.

I did a quick search for F2FS defragmentation, but I couldn't find
anything. Did you mean this section of the article? "...it provides
large-scale write gathering so that when lots of blocks need to be
written at the same time they are collected into large sequential
writes..." Maybe I missed something, but isn't this just an inherent
property of a log-structured file system rather than defragmentation?

Hot data tracking could be extremely useful for the cleaner. This
paper [1] suggests that the best cleaner performance can be achieved
by distinguishing between hot and cold data. Is something like that
already implemented? Maybe I could do that for my master's thesis
instead of the defragmentation task... ;)

Thanks for the links. 

best regards,
Andreas Rohner

[1] http://www.cs.berkeley.edu/~brewer/cs262/LFS.pdf



* Re: Contributing to NILFS
  2012-12-16 17:45       ` Andreas Rohner
@ 2012-12-17  6:30         ` Vyacheslav Dubeyko
  2012-12-17 10:23           ` Andreas Rohner
  0 siblings, 1 reply; 11+ messages in thread
From: Vyacheslav Dubeyko @ 2012-12-17  6:30 UTC (permalink / raw)
  To: Andreas Rohner; +Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi Andreas,

On Sun, 2012-12-16 at 18:45 +0100, Andreas Rohner wrote:

[snip]
> > 
> > Also, it is very important to choose the point of defragmentation.
> > I mean that it is possible to try to prevent fragmentation, or to
> > correct fragmentation after flushing to the volume. Some hybrid
> > technique is also possible, I think. An I/O pattern or file type
> > can be a basis for such a decision, I think.
> 
> Yes, I agree. It is of course a good idea to reorder the data before
> flushing, and probably also to reorder it in the cleaner, but I
> thought that was already implemented and optimized. Is it?
> 

I don't quite understand what implementation you are talking about.
Could you point out the NILFS2 source code that implements this
technique? As I understand it, if we had implemented data reordering
before flush and during cleaning, then we would already have online
defragmentation. But, if so, why is this task on the TODO list?


> Instead I imagined a tool like xfs_fsr for XFS, so the user can
> decide when to defragment the file system by running it manually or
> via a cron job.

If you are talking about a user-space tool, then you are talking about
an offline defragmenter. I think an offline defragmenter is not so
interesting for users. The most important objections are:

(1) Usually, NILFS2 is used on NAND-based devices (SSDs, SD cards and
so on). As a result, an offline defragmenter will decrease NAND
lifetime through its activity.

(2) Even if you use NILFS2 on an HDD, an offline defragmenter will
decrease the available free space through its operations, because
NILFS2 is a log-structured file system: every write goes into a new
free block (the COW technique) and creates new segments. So the
probability that an offline defragmenter exhausts free space is very
high.


> Maybe this is a bit naive, since I probably don't know enough
> about NILFS. Couldn't we just calculate the number of segments a file
> would use if it were stored optimally and compare that to the actual
> number of segments the file is spread over? For example, file A has
> 16 MB. Let's assume segments are of size 8 MB. So (ignoring the
> metadata) file A should use 2 segments. Now we count the different
> segments where the blocks of file A really are, let's say 10, and
> calculate 1-(2/10)=0.8. So it is 80% fragmented.
> 

I think that if parts of a file are placed in sibling segments, then
defragmenting does not make sense. So even if your technique detects a
file as fragmented, that alone is not enough to decide whether
defragmentation is necessary. Moreover, how do you plan to answer this
simple question: given a block number, how do you detect which file
contains it?

> I wouldn't do that in the cleaner or in the background. Just a tool like
> xfs_fsr, that the user can run once a month in the middle of the night
> with a cron job. The tool would go through every file, calculate the
> fragmentation and collect other statistics and decide if it is worth
> defragmenting it or not.
> 
> If the user has a SSD he/she can decide not to defragment at all.
> 

I think that an online defragmenter can be very useful in the SSD case
as well.

> > As I understand, F2FS [1] has some defragmenting approaches. I think
> > the technique for detecting fragmented files and the degree of
> > fragmentation needs to be discussed more deeply. But maybe the hot
> > data tracking patch [2,3] will be a basis for such a discussion.
> 
> I did a quick search for F2FS defragmentation, but I couldn't find
> anything. Did you mean this section of the article? "...it provides
> large-scale write gathering so that when lots of blocks need to be
> written at the same time they are collected into large sequential
> writes..." Maybe I missed something, but isn't this just the inherent
> property of a log-structured file system and not defragmentation?
> 

I meant that, from my point of view, the F2FS architecture contains
defragmenting opportunities at its very basis. And I think these
approaches can serve as a basis for elaborating an online defragmenting
technique.

> Hot data tracking could be extremely useful for the cleaner. This paper
> [1] suggests that the best cleaner performance can be achieved by
> distinguishing between hot and cold data. Is something like that already
> implemented? Maybe I could do that for my masters thesis instead of the
> defragmentation task... ;)
> 

F2FS uses the technique of distinguishing between hot and cold data
very deeply; it is one of the base techniques of that filesystem.

With the best regards,
Vyacheslav Dubeyko.

> Thanks for the links. 
> 
> best regards,
> Andreas Rohner
> 
> [1] http://www.cs.berkeley.edu/~brewer/cs262/LFS.pdf


--
To unsubscribe from this list: send the line "unsubscribe linux-nilfs" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Contributing to NILFS
  2012-12-17  6:30         ` Vyacheslav Dubeyko
@ 2012-12-17 10:23           ` Andreas Rohner
  2012-12-19  7:13             ` Vyacheslav Dubeyko
  0 siblings, 1 reply; 11+ messages in thread
From: Andreas Rohner @ 2012-12-17 10:23 UTC (permalink / raw)
  To: Vyacheslav Dubeyko; +Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi,

> I am slightly unsure which implementation you are talking about. Could
> you point out the NILFS2 source code that implements this technique?
> As I understand it, if we had already implemented data reordering
> before flushing and during cleaning, that would mean online
> defragmentation is implemented. But if so, why is this task still on
> the TODO list?

I guess I just assumed it. But I would connect these issues more with
the first item on the TODO-List "Smarter and more efficient Garbage
Collector".

> > Instead I imagined a tool like xfs_fsr for XFS. So the user can decide
> > when to defragment the file system, by running it manually or with a
> > cron job.
> 
> If you are talking about a user-space tool, then you are talking about
> an offline defragmenter. I think an offline defragmenter is not that
> interesting for users. The most important objections are:

I am sorry about the misunderstanding. I thought the term "online" just
means that the file system is mounted while the defragmentation tool is
running. So offline defragmentation would be if you had to unmount the
file system to defragment it. EXT4 [1] and XFS both do "online"
defragmentation with a user-space tool. I assumed that the item on the
TODO-List means something similar. Such a tool could be useful to
reduce aging effects. It should be very conservative and probably not
run every day, but instead once a month.

[1] http://lwn.net/Articles/317787/

> (1) NILFS2 is usually used on NAND-based devices (SSDs, SD cards and
> so on). As a result, an offline defragmenter would decrease NAND
> lifetime through its extra write activity.

Yes, that is true. If most users run NILFS2 on NAND-based devices, such
a tool would be useless.

> (2) Even if you use NILFS2 on an HDD, an offline defragmenter would
> decrease the available free space, because NILFS2 is a log-structured
> file system: every write goes into a new free block (copy-on-write)
> and creates new segments. So the probability that an offline
> defragmenter exhausts the free space is very high.

Yes, that is also true; I was talking about that in my second mail. The
defragmentation tool could try to avoid that as much as possible and
clean up after itself, but the cleanup would again decrease NAND
lifetime.

> 
> > Maybe this is a bit naive, since I probably don't know enough
> > about NILFS. Couldn't we just calculate the number of segments a
> > file would use if it were stored optimally and compare that to the
> > actual number of segments the file is spread over? For example,
> > file A has 16 MB. Let's assume segments are of size 8 MB. So
> > (ignoring the metadata) file A should use 2 segments. Now we count
> > the different segments where the blocks of file A really are, let's
> > say 10, and calculate 1-(2/10)=0.8. So it is 80% fragmented.
> > 
> 
> I think that if parts of a file are placed in sibling segments, then
> defragmenting does not make sense. So even if your technique detects a
> file as fragmented, that alone is not enough to decide whether
> defragmentation is necessary. Moreover, how do you plan to answer this
> simple question: given a block number, how do you detect which file
> contains it?

Yes, I agree: if parts of the file are in sibling segments, we should
not defragment.

About your second point, unfortunately I don't know enough about NILFS2
to answer that. I would have to study the source code first. But I
trust your assessment that it's difficult.

> > I wouldn't do that in the cleaner or in the background. Just a tool like
> > xfs_fsr, that the user can run once a month in the middle of the night
> > with a cron job. The tool would go through every file, calculate the
> > fragmentation and collect other statistics and decide if it is worth
> > defragmenting it or not.
> > 
> > If the user has a SSD he/she can decide not to defragment at all.
> > 
> 
> I think that an online defragmenter can be very useful in the SSD
> case as well.
> 
> > > As I understand, F2FS [1] has some defragmenting approaches. I
> > > think the technique for detecting fragmented files and the degree
> > > of fragmentation needs to be discussed more deeply. But maybe the
> > > hot data tracking patch [2,3] will be a basis for such a
> > > discussion.
> > 
> > I did a quick search for F2FS defragmentation, but I couldn't find
> > anything. Did you mean this section of the article? "...it provides
> > large-scale write gathering so that when lots of blocks need to be
> > written at the same time they are collected into large sequential
> > writes..." Maybe I missed something, but isn't this just the inherent
> > property of a log-structured file system and not defragmentation?
> > 
> 
> I meant that, from my point of view, the F2FS architecture contains
> defragmenting opportunities at its very basis. And I think these
> approaches can serve as a basis for elaborating an online
> defragmenting technique.
> 
> > Hot data tracking could be extremely useful for the cleaner. This paper
> > [1] suggests that the best cleaner performance can be achieved by
> > distinguishing between hot and cold data. Is something like that already
> > implemented? Maybe I could do that for my masters thesis instead of the
> > defragmentation task... ;)
> > 
> 
> F2FS uses the technique of distinguishing between hot and cold data
> very deeply; it is one of the base techniques of that filesystem.

Ok, so to sum up: the task would be to implement reordering and
defragmenting abilities in the cleaner and before flushing.
Additionally, one could use the information from hot data tracking to
improve the cleaner, as in F2FS. The defragmenting activities of the
cleaner should cause minimal overhead and no extra writes, to prevent a
reduction of NAND lifetime. An extra user-space utility is probably
useless.
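
As a point of reference, the LFS paper cited earlier in the thread
formalizes exactly this hot/cold awareness in the cleaner as a
cost-benefit policy: clean the segment that maximizes freed space times
data age over cleaning cost. A rough sketch (the candidate segments and
their numbers are invented for illustration):

```python
def cost_benefit(live_ratio, age):
    """Benefit-to-cost score for cleaning one segment, as in the LFS
    paper: cleaning reads the whole segment (cost 1) and rewrites the
    live fraction u (cost u), freeing (1 - u) of a segment; the age of
    the data estimates how long the freed space will stay free."""
    u = live_ratio
    return (1.0 - u) * age / (1.0 + u)

# Invented candidates: (fraction of live data, age of that data)
segments = {
    "hot, nearly full":  (0.90, 10),
    "warm, half full":   (0.50, 100),
    "cold, nearly full": (0.95, 10000),
}
best = max(segments, key=lambda name: cost_benefit(*segments[name]))
print(best)  # the cold segment wins despite holding the most live data
```

This is why hot data tracking helps: cold, old segments are worth
cleaning even when nearly full, because the freed space stays free.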

I am sorry for the confusion, but with EXT4 and XFS, online
defragmentation is done with a user-space tool. It seems we were talking
about two different things the whole time :). I am glad we cleared that
up.

best regards,
Andreas Rohner


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Contributing to NILFS
  2012-12-17 10:23           ` Andreas Rohner
@ 2012-12-19  7:13             ` Vyacheslav Dubeyko
  0 siblings, 0 replies; 11+ messages in thread
From: Vyacheslav Dubeyko @ 2012-12-19  7:13 UTC (permalink / raw)
  To: Andreas Rohner; +Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

On Mon, 2012-12-17 at 11:23 +0100, Andreas Rohner wrote:
> Hi,
> 
> > I am slightly unsure which implementation you are talking about.
> > Could you point out the NILFS2 source code that implements this
> > technique? As I understand it, if we had already implemented data
> > reordering before flushing and during cleaning, that would mean
> > online defragmentation is implemented. But if so, why is this task
> > still on the TODO list?
> 
> I guess I just assumed it. But I would connect these issues more with
> the first item on the TODO-List "Smarter and more efficient Garbage
> Collector".
> 

As you mentioned earlier, yes, garbage collection and defragmenting are
tightly related techniques, from my point of view. The GC would become
smarter and more efficient if defragmenting activity were added to the
garbage collection. Moreover, I think an even more promising technique
is some defragmenting activity during writing into a new segment. I
mean that when we add or modify some blocks of a file, it is possible
to also write into the new segment additional fragments of that file
taken from old ("dirty") segments. In this way we can achieve garbage
collection and defragmenting in the background of ordinary write
operations. But, as you can see, such a technique can only be
implemented on the kernel side.
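
That idea could be sketched roughly as follows (purely illustrative
data structures, not NILFS2 internals; the live-ratio threshold is an
invented tuning knob): when flushing a file's dirty blocks, also pull
the file's clean blocks out of mostly-dead old segments, so the file
lands contiguously and those segments move closer to reclaimable.

```python
def blocks_to_write(file_blocks, dirty, seg_of, seg_live_ratio, threshold=0.5):
    """Select blocks for the new segment: every dirty block of the file,
    plus clean blocks of the same file that currently sit in mostly-dead
    old segments (live ratio below threshold). Rewriting those defragments
    the file and empties the old segments as a side effect of the write."""
    out = []
    for blk in file_blocks:
        if blk in dirty:
            out.append(blk)                    # must be written anyway
        elif seg_live_ratio[seg_of[blk]] < threshold:
            out.append(blk)                    # opportunistic rewrite
    return out

# File occupies blocks 0-3; block 1 was modified; block 3 lives in a
# nearly-dead segment (10% live), so it is pulled in alongside block 1.
result = blocks_to_write(
    file_blocks=[0, 1, 2, 3],
    dirty={1},
    seg_of={0: 7, 1: 7, 2: 8, 3: 9},
    seg_live_ratio={7: 0.9, 8: 0.8, 9: 0.1},
)
print(result)  # [1, 3]
```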


> > > Instead I imagined a tool like xfs_fsr for XFS. So the user can decide
> > > when to defragment the file system, by running it manually or with a
> > > cron job.
> > 
> > If you are talking about a user-space tool, then you are talking
> > about an offline defragmenter. I think an offline defragmenter is
> > not that interesting for users. The most important objections are:
> 
> I am sorry about the misunderstanding. I thought the term "online" just
> means that the file system is mounted, while the defragmentation tool is
> running. So offline defragmentation would be if you had to unmount the
> file system for defragmentation. EXT4 [1] and XFS both do "online"
> defragmentation with a user-space tool. I assumed that the item on the
> TODO-List means something similar. Such a tool could be useful to reduce
> aging effects. It should be very conservative and probably not run every
> day, but instead once a month.
> 
> [1] http://lwn.net/Articles/317787/
> 

Yes, it is possible to implement defragmenting with a user-space
approach. But I think the current "trend" and end-user expectation is
to integrate such techniques (fsck, defragmenting, and so on) into the
filesystem internals. So a kernel-side implementation is the more
promising way, from my point of view. Moreover, a kernel-side
implementation can provide more infrastructure opportunities and save
significant implementation effort for the online defragmenting task.

And, again, I think online defragmenting should be useful for both the
HDD and SSD cases. :-)

With the best regards,
Vyacheslav Dubeyko.



^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2012-12-19  7:13 UTC | newest]

Thread overview: 11+ messages
-- links below jump to the message on this page --
2012-12-10 20:05 Contributing to NILFS Andreas Rohner
2012-12-11  6:46 ` Vyacheslav Dubeyko
2012-12-11 13:54   ` Andreas Rohner
2012-12-12  7:08     ` Vyacheslav Dubeyko
2012-12-12 15:30       ` Sven-Göran Bergh
     [not found]         ` <1355326242.67765.YahooMailNeo-mKBY30tKGRG2Y7dhQGSVAJOW+3bF1jUfVpNB7YpNyf8@public.gmane.org>
2012-12-12 19:57           ` Vyacheslav Dubeyko
     [not found]             ` <706EE260-E8A2-410A-9211-FB4859516478-yeENwD64cLxBDgjK7y7TUQ@public.gmane.org>
2012-12-13 10:59               ` Sven-Göran Bergh
2012-12-16 17:45       ` Andreas Rohner
2012-12-17  6:30         ` Vyacheslav Dubeyko
2012-12-17 10:23           ` Andreas Rohner
2012-12-19  7:13             ` Vyacheslav Dubeyko
