linux-btrfs.vger.kernel.org archive mirror
* btrfs support for efficient SSD operation (data blocks alignment)
@ 2012-02-08 19:24 Martin
  2012-02-09  1:42 ` Liu Bo
  2012-02-10 18:18 ` Martin Steigerwald
  0 siblings, 2 replies; 6+ messages in thread
From: Martin @ 2012-02-08 19:24 UTC (permalink / raw)
  To: linux-btrfs

My understanding is that for x86 architecture systems, btrfs only allows
a sector size of 4kB for a HDD/SSD. That is fine for the present HDDs
assuming the partitions are aligned to a 4kB boundary for that device.
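(As an aside: partition alignment can be sanity-checked with parted's
align-check command, assuming a parted recent enough to have it, e.g.

parted /dev/sdX align-check optimal 1

where sdX and the partition number are placeholders for the device in
question.)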

However for SSDs...

I'm using for example a 60GByte SSD that has:

    8kB page size;
    16kB logical to physical mapping chunk size;
    2MB erase block size;
    64MB cache.

And the sector size reported to Linux 3.0 is the default 512 bytes!
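(For reference, what the kernel thinks it knows can be read back from
sysfs, e.g.

cat /sys/block/sdX/queue/logical_block_size
cat /sys/block/sdX/queue/physical_block_size
cat /sys/block/sdX/queue/optimal_io_size

with sdX a placeholder. On consumer SSDs these rarely reflect the real
internal geometry, which is exactly the problem.)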


My first thought is to try formatting with a sector size of 16kB to
align with the SSD logical mapping chunk size. This is to avoid SSD
write amplification. Also, the data transfer performance for that device
is near maximum for writes with a blocksize of 16kB and above. Yet,
btrfs supports a 4kByte page/sector size only at present...
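(That blocksize behaviour is easy to reproduce with a crude direct-I/O
test on the mounted filesystem, for example

dd if=/dev/zero of=testfile bs=16k count=4096 oflag=direct conv=fsync

repeated for bs=4k, 8k, 16k and 32k; testfile is just a scratch file and
the numbers are only indicative.)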


Is there any control possible over the btrfs filesystem structure to map
metadata and data structures to the underlying device boundaries?

For example to maximise performance, can the data chunks and the data
chunk size be aligned to be sympathetic to the SSD logical mapping chunk
size and the erase block size?

What features other than the trim function does btrfs employ to optimise
for SSD operation?


Regards,
Martin



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: btrfs support for efficient SSD operation (data blocks alignment)
  2012-02-08 19:24 btrfs support for efficient SSD operation (data blocks alignment) Martin
@ 2012-02-09  1:42 ` Liu Bo
  2012-02-10  1:05   ` Martin
  2012-02-10 18:18 ` Martin Steigerwald
  1 sibling, 1 reply; 6+ messages in thread
From: Liu Bo @ 2012-02-09  1:42 UTC (permalink / raw)
  To: Martin; +Cc: linux-btrfs

On 02/09/2012 03:24 AM, Martin wrote:
> My understanding is that for x86 architecture systems, btrfs only allows
> a sector size of 4kB for a HDD/SSD. That is fine for the present HDDs
> assuming the partitions are aligned to a 4kB boundary for that device.
> 
> However for SSDs...
> 
> I'm using for example a 60GByte SSD that has:
> 
>     8kB page size;
>     16kB logical to physical mapping chunk size;
>     2MB erase block size;
>     64MB cache.
> 
> And the sector size reported to Linux 3.0 is the default 512 bytes!
> 
> 
> My first thought is to try formatting with a sector size of 16kB to
> align with the SSD logical mapping chunk size. This is to avoid SSD
> write amplification. Also, the data transfer performance for that device
> is near maximum for writes with a blocksize of 16kB and above. Yet,
> btrfs supports a 4kByte page/sector size only at present...
> 
> 
> Is there any control possible over the btrfs filesystem structure to map
> metadata and data structures to the underlying device boundaries?
> 
> For example to maximise performance, can the data chunks and the data
> chunk size be aligned to be sympathetic to the SSD logical mapping chunk
> size and the erase block size?
> 

At least for metadata, block sizes larger than 4K will be supported;
that work is under development.

> What features other than the trim function does btrfs employ to optimise
> for SSD operation?
> 

e.g. COW (avoids writing to the same place multiple times) and
delayed allocation (intended to reduce the write frequency).
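There is also the ssd mount option (and ssd_spread), which tweaks the
allocator for SSDs and should be auto-detected on non-rotational
devices; a sketch of how it is typically used, with sdX as a placeholder:

mount -o ssd,discard /dev/sdX /mnt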

thanks,
liubo

> 
> Regards,
> Martin
> 
> 


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: btrfs support for efficient SSD operation (data blocks alignment)
  2012-02-09  1:42 ` Liu Bo
@ 2012-02-10  1:05   ` Martin
  0 siblings, 0 replies; 6+ messages in thread
From: Martin @ 2012-02-10  1:05 UTC (permalink / raw)
  To: linux-btrfs

On 09/02/12 01:42, Liu Bo wrote:
> On 02/09/2012 03:24 AM, Martin wrote:

[ No problem for 4kByte sector HDDs. However, for SSDs... ]

>> However for SSDs...
>>
>> I'm using for example a 60GByte SSD that has:
>>
>>     8kB page size;
>>     16kB logical to physical mapping chunk size;
>>     2MB erase block size;
>>     64MB cache.
>>
>> And the sector size reported to Linux 3.0 is the default 512 bytes!
[...]
>> Is there any control possible over the btrfs filesystem structure to map
>> metadata and data structures to the underlying device boundaries?
>>
>> For example to maximise performance, can the data chunks and the data
>> chunk size be aligned to be sympathetic to the SSD logical mapping chunk
>> size and the erase block size?
>>
> 
> At least for metadata, block sizes larger than 4K will be supported;
> that work is under development.

And also for the data? And will smaller data chunks still be packed in
with the metadata, as is done already, but with all the present
parameters scaled in proportion to the "sector size"?

(For my example, the filesystem may as well use 16kByte sectors because
the SSD firmware will do a read-modify-write for anything smaller.)


>> What features other than the trim function does btrfs employ to optimise
>> for SSD operation?
>>
> 
> e.g COW(avoid writing to one place multi-times),
> delayed allocation(intend to reduce the write frequency)

I'm using ext4 on a SSD web server and have formatted with (for ext4):

mke2fs -v -T ext4 -L fs_label_name -b 4096 -E
stride=4,stripe-width=4,lazy_itable_init=0 -O
none,dir_index,extent,filetype,flex_bg,has_journal,sparse_super,uninit_bg /dev/sdX

and mounted with the mount options:
journal_checksum,barrier,stripe=4,delalloc,commit=300,max_batch_time=15000,min_batch_time=200,discard,noatime,nouser_xattr,noacl,errors=remount-ro

The main bits for the SSD are the:
"stripe=4,delalloc,commit=300,max_batch_time=15000,min_batch_time=200,discard,noatime"

The "-b 4096" is the maximum value allowed. The stride and stripe-width
then take that up to 16kBytes (hopefully...).
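(The arithmetic, as I understand it: stride and stripe-width are given
in filesystem blocks, so 4 x 4096 bytes = 16384 bytes, i.e. one 16kByte
SSD mapping chunk per stripe.)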

(Make sure you're on a good UPS with a reliable shutdown mechanism for
power fail!)


A further thought is:

For my one SSD example, the erase state appears to be all "0xFF"... Can
the fs easily check the erase state value and leave any blank space
unchanged to minimise the bit flipping?

Would that be reasonable to include?
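(A crude and destructive way to see what a trimmed area reads back as,
assuming a util-linux new enough to have blkdiscard and a device that
gives deterministic reads after trim, run only against a scratch device:

blkdiscard -o 0 -l 1048576 /dev/sdX
dd if=/dev/sdX bs=4096 count=1 2>/dev/null | hexdump -C | head

sdX being a placeholder.)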


All unnecessary for HDDs but possibly of use for maintaining the
lifespan of SSDs...

Hope of interest,

Regards,
Martin



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: btrfs support for efficient SSD operation (data blocks alignment)
  2012-02-08 19:24 btrfs support for efficient SSD operation (data blocks alignment) Martin
  2012-02-09  1:42 ` Liu Bo
@ 2012-02-10 18:18 ` Martin Steigerwald
  2012-05-01 17:04   ` Martin
  1 sibling, 1 reply; 6+ messages in thread
From: Martin Steigerwald @ 2012-02-10 18:18 UTC (permalink / raw)
  To: linux-btrfs

Hi Martin,

On Wednesday, 8 February 2012, Martin wrote:
> My understanding is that for x86 architecture systems, btrfs only
> allows a sector size of 4kB for a HDD/SSD. That is fine for the
> present HDDs assuming the partitions are aligned to a 4kB boundary for
> that device.
> 
> However for SSDs...
> 
> I'm using for example a 60GByte SSD that has:
> 
>     8kB page size;
>     16kB logical to physical mapping chunk size;
>     2MB erase block size;
>     64MB cache.
> 
> And the sector size reported to Linux 3.0 is the default 512 bytes!
> 
> 
> My first thought is to try formatting with a sector size of 16kB to
> align with the SSD logical mapping chunk size. This is to avoid SSD
> write amplification. Also, the data transfer performance for that
> device is near maximum for writes with a blocksize of 16kB and above.
> Yet, btrfs supports a 4kByte page/sector size only at present...

The thing is, as far as I know the better SSDs, and even the dumber
ones, have quite some intelligence in the firmware. And at least for me
it's not clear what the firmware of my Intel SSD 320 does on its own and
whether any of my optimization attempts even matter.

So I am not sure whether thinking about a single write operation of say
4 KB or 2 KB in isolation even makes sense. I bet several processes
often write data at once, so there is a larger amount of data to write.

What is also not clear to me is whether the SSD will combine several
write requests into a single mapping chunk or erase block, or combine
them into the already erased space of an erase block. I would bet at
least the better SSDs do. So even when, from the OS point of view, in a
simplistic example, one write of 1 MB goes to LBA 40000 and one write of
1 MB goes to LBA 80000, the SSD might still use just a single erase
block and place the writes next to each other. As far as I understand,
SSDs do COW to spread writes evenly across erase blocks. As I further
understand, from a seek time point of view the exact location of a write
does not matter at all. So to me it looks perfectly sane for an SSD
firmware to combine writes as it sees fit. And SSDs that carry
capacitors, like the above-mentioned Intel SSD, may even cache writes
for a while to wait for further requests.

The article on write amplification on Wikipedia gives me a glimpse of
the complexity involved[1]. Yes, I set stripe-width as well on my Ext4
filesystem, but frankly I am not even sure whether this has any positive
effect except maybe sparing the SSD controller firmware some reshuffling
work.

So from my current point of view most of what you wrote IMHO is more
important for really dumb flash. That is what, as I understood it, some
kernel developers would really like to see, so that most of the logic
could be put into the kernel and be easily modifiable: JBOF, just a
bunch of flash cells with an interface to access them directly. But for
now AFAIK most consumer grade SSDs just provide a SATA interface and
hide the internals. So an optimization for one kind or one brand of SSD
may not be suitable for another.

There are PCI Express models, but these probably aren't dumb either. And
then there is the idea of auto commit memory (ACM) by Fusion-IO, which
just makes a part of the virtual address space persistent.

So it's a question of where to put the intelligence. For current SSDs it
seems the intelligence really is near the storage medium, and then IMHO
it makes sense to even reduce the intelligence on the Linux side.

[1] http://en.wikipedia.org/wiki/Write_amplification

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: btrfs support for efficient SSD operation (data blocks alignment)
  2012-02-10 18:18 ` Martin Steigerwald
@ 2012-05-01 17:04   ` Martin
  2012-05-01 17:20     ` Hubert Kario
  0 siblings, 1 reply; 6+ messages in thread
From: Martin @ 2012-05-01 17:04 UTC (permalink / raw)
  To: linux-btrfs

Looking at this again from some time ago...

Brief summary:

There is a LOT of nefarious cleverness being attempted by SSD
manufacturers to accommodate a 4kByte block size. Get that wrong, or
just be unsympathetic to that 'cleverness', and you suffer performance
degradation and/or premature device wear.

Is that significant? Very likely it will be for the new three-bit FLASH
devices that have a PE (program-erase) lifespan of only 1000 or so
cycles per cell.

A better question is whether the filesystem can be easily made to be
more sympathetic to all SSDs?


From my investigating, there appears to be a sweet spot for performance
for writing (aligned) 16kByte blocks.

TRIM and keeping the device non-full also helps greatly.
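(Where the discard mount option is not in use, trim can be issued in
batches instead with fstrim from util-linux, e.g. from a cron job:

fstrim -v /mountpoint

with /mountpoint a placeholder for the filesystem in question.)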

I suspect that consecutive writes, as is the case for HDDs, also helps
performance to a lesser degree.


The erased state for SSDs appears to be either all 0xFF or all 0x00
(I've got examples of both). Can that be automatically detected and used
by btrfs so as to minimise write cycling the bits for (unused) padded
areas?

Are 16kByte blocks/sectors useful to btrfs?

Or rather, can btrfs usefully use 16kByte blocks?

Can that be supported?



Further detail...

Some good comments:

On 10/02/12 18:18, Martin Steigerwald wrote:
> Hi Martin,
> 
> On Wednesday, 8 February 2012, Martin wrote:
>> My understanding is that for x86 architecture systems, btrfs only
>> allows a sector size of 4kB for a HDD/SSD. That is fine for the
>> present HDDs assuming the partitions are aligned to a 4kB boundary for
>> that device.
>>
>> However for SSDs...
>>
>> I'm using for example a 60GByte SSD that has:
>>
>>     8kB page size;
>>     16kB logical to physical mapping chunk size;
>>     2MB erase block size;
>>     64MB cache.
>>
>> And the sector size reported to Linux 3.0 is the default 512 bytes!
>>
>>
>> My first thought is to try formatting with a sector size of 16kB to
>> align with the SSD logical mapping chunk size. This is to avoid SSD
>> write amplification. Also, the data transfer performance for that
>> device is near maximum for writes with a blocksize of 16kB and above.
>> Yet, btrfs supports a 4kByte page/sector size only at present...
> 
> The thing is, as far as I know the better SSDs, and even the dumber
> ones, have quite some intelligence in the firmware. And at least for me
> it's not clear what the firmware of my Intel SSD 320 does on its own
> and whether any of my optimization attempts even matter.

[...]

> The article on write amplification on Wikipedia gives me a glimpse of
> the complexity involved[1]. Yes, I set stripe-width as well on my Ext4
> filesystem, but frankly I am not even sure whether this has any
> positive effect except maybe sparing the SSD controller firmware some
> reshuffling work.
> 
> So from my current point of view most of what you wrote IMHO is more
> important for really dumb flash. ...

[...]

> grade SSDs just provide a SATA interface and hide the internals. So an
> optimization for one kind or one brand of SSD may not be suitable for
> another.
> 
> There are PCI Express models, but these probably aren't dumb either.
> And then there is the idea of auto commit memory (ACM) by Fusion-IO,
> which just makes a part of the virtual address space persistent.
> 
> So it's a question of where to put the intelligence. For current SSDs
> it seems the intelligence really is near the storage medium, and then
> IMHO it makes sense to even reduce the intelligence on the Linux side.
> 
> [1] http://en.wikipedia.org/wiki/Write_amplification


As an engineer, I have a deep mistrust of the phrase "Trust me" or of
"Magic" or "Proprietary, secret" or "Proprietary, keep out!".

Anand at Anandtech has produced some good articles on some of what goes
on inside SSDs and some of the consequences. If you want a good long
read:

The SSD Relapse: Understanding and Choosing the Best SSD
http://www.anandtech.com/print/2829

Covers block allocation and write amplification and the effect of free
space on the write amplification factor.


... The Fastest MLC SSD We've Ever Tested
http://www.anandtech.com/print/2899

Details the Sandforce controller at that time and its use of data
compression on the controller. The latest Sandforce controllers also
utilise data deduplication on the SSD!


OCZ Agility 3 (240GB) Review
http://www.anandtech.com/print/4346

Shows an example set of Performance vs Transfer Size graphs.


Flashy fists fly as OCZ and DDRdrive row over SSD performance
http://www.theregister.co.uk/2011/01/14/ocz_and_ddrdrive_performance_row/

Shows an old and unfair comparison highlighting SSD performance
degradation due to write amplification for 4kByte random writes on a
full device.



A bit of a "Joker" in the pack are the SSDs that implement their own
controller-level data compression and data deduplication (all
proprietary and secret...). Of course, that is all useless for encrypted
filesystems... Also, what does the controller-based data compression do
for aligning to the underlying device blocks?


What is apparent from all that lot is that 4kBytes is a bit of a
headache for SSDs. Perhaps we should all move to a more sympathetic
aligned 16kBytes or 32kBytes?

What's the latest state of play with btrfs for selecting a sector size
of say 16kBytes?

Regards,
Martin




^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: btrfs support for efficient SSD operation (data blocks alignment)
  2012-05-01 17:04   ` Martin
@ 2012-05-01 17:20     ` Hubert Kario
  0 siblings, 0 replies; 6+ messages in thread
From: Hubert Kario @ 2012-05-01 17:20 UTC (permalink / raw)
  To: Martin; +Cc: linux-btrfs

On Tuesday 01 of May 2012 18:04:25 Martin wrote:
> Are 16kByte blocks/sectors useful to btrfs?
> 
> Or rather, can btrfs usefully use 16kByte blocks?

Yes, and they are already supported using the -l (leaf size) and
-n (node size) flags:

mkfs.btrfs -l $((4*4096)) -n $((4*4096)) /dev/sda1

You can also set the sector size to 16kB, but that requires 16kB memory
pages.
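For completeness: if I remember the flags right, mkfs.btrfs also takes
-s to set that sector size, e.g.

mkfs.btrfs -s $((4*4096)) -l $((4*4096)) -n $((4*4096)) /dev/sda1

but, as said, the resulting filesystem will only mount on a kernel built
with 16kB pages, so this does not help on x86.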

Regards,
-- 
Hubert Kario
QBS - Quality Business Software
02-656 Warszawa, ul. Ksawerów 30/85
tel. +48 (22) 646-61-51, 646-74-24
www.qbs.com.pl

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2012-05-01 17:20 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-02-08 19:24 btrfs support for efficient SSD operation (data blocks alignment) Martin
2012-02-09  1:42 ` Liu Bo
2012-02-10  1:05   ` Martin
2012-02-10 18:18 ` Martin Steigerwald
2012-05-01 17:04   ` Martin
2012-05-01 17:20     ` Hubert Kario

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).