* Maximum file system size of XFS?
@ 2013-03-09 20:51 Pascal
  2013-03-09 22:29 ` Ric Wheeler
  2013-03-11 21:45 ` Martin Steigerwald
  0 siblings, 2 replies; 16+ messages in thread
From: Pascal @ 2013-03-09 20:51 UTC (permalink / raw)
  To: linux-xfs

Hello,

I am asking you because I am unsure about the correct answer and
different sources give me different numbers.


My question is: What is the maximum file system size of XFS?

The official page says: 2^63 = 9 x 10^18 = 9 exabytes
Source: http://oss.sgi.com/projects/xfs/

Wikipedia says 16 exabytes.
Source: https://en.wikipedia.org/wiki/XFS

Another reference book says 8 exabytes (2^63).


Can anyone tell me, and explain, what the maximum file system size for
XFS is?


Thank you in advance!

Pascal


* Re: Maximum file system size of XFS?
  2013-03-09 20:51 Maximum file system size of XFS? Pascal
@ 2013-03-09 22:29 ` Ric Wheeler
  2013-03-09 22:39   ` Pascal
  2013-03-11 21:45 ` Martin Steigerwald
  1 sibling, 1 reply; 16+ messages in thread
From: Ric Wheeler @ 2013-03-09 22:29 UTC (permalink / raw)
  To: Pascal; +Cc: linux-xfs

On 03/09/2013 03:51 PM, Pascal wrote:
> Hello,
>
> I am asking you because I am insecure about the correct answer and
> different sources give me different numbers.
>
>
> My question is: What is the maximum file system size of XFS?
>
> The official page says: 2^63 = 9 x 10^18 = 9 exabytes
> Source: http://oss.sgi.com/projects/xfs/
>
> Wikipedia says 16 exabytes.
> Source: https://en.wikipedia.org/wiki/XFS
>
> Another reference books says 8 exabytes (2^63).
>
>
> Can anyone tell me and explain what is the maximum file system size for
> XFS?
>
>
> Thank you in advance!
>
> Pascal
>

The maximum size that XFS can address (which is what most people post in things 
like wikipedia) is kind of a fantasy number.

A better question is: what is the maximum size XFS file system people
have in production (even better, people who have the same workload as you)?
Lots and lots of tiny files are more challenging than very large video files,
for example.

I think that you can easily find people with hundreds of terabytes in
production use. For Red Hat, we support production use of 100TB per XFS
instance in RHEL6, for example, since that is what we test at (and have been
known to officially support larger instances by exception).

One of the things to watch out for with very large file systems is how much
DRAM you have in the server. If you ever need to xfs_repair a 1PB file
system, you will need a very beefy box :)

Ric


* Re: Maximum file system size of XFS?
  2013-03-09 22:29 ` Ric Wheeler
@ 2013-03-09 22:39   ` Pascal
  2013-03-10  1:10     ` Eric Sandeen
  2013-03-11  1:55     ` Dave Chinner
  0 siblings, 2 replies; 16+ messages in thread
From: Pascal @ 2013-03-09 22:39 UTC (permalink / raw)
  To: xfs

On Sat, 09 Mar 2013 17:29:23 -0500,
Ric Wheeler <rwheeler@redhat.com> wrote:

> On 03/09/2013 03:51 PM, Pascal wrote:
> > Hello,
> >
> > I am asking you because I am insecure about the correct answer and
> > different sources give me different numbers.
> >
> >
> > My question is: What is the maximum file system size of XFS?
> >
> > The official page says: 2^63 = 9 x 10^18 = 9 exabytes
> > Source: http://oss.sgi.com/projects/xfs/
> >
> > Wikipedia says 16 exabytes.
> > Source: https://en.wikipedia.org/wiki/XFS
> >
> > Another reference books says 8 exabytes (2^63).
> >
> >
> > Can anyone tell me and explain what is the maximum file system size
> > for XFS?
> >
> >
> > Thank you in advance!
> >
> > Pascal
> >
> 
> The maximum size that XFS can address (which is what most people post
> in things like wikipedia) is kind of a fantasy number.
> 
> What is a better question is what is the maximum size XFS file system
> people have in production (even better, people who have your same
> work load). Lots and lots of tiny files are more challenging than
> very large video files for example.
> 
> I think that you can easily find people with 100's of terabytes in
> production use. For Red Hat, we support production use of 100TB per
> XFS instance in RHEL6 for example since that is what we test at (and
> have been know to officially support larger instances by exception).
> 
> Some of the things to watch out for in very large file systems is how
> much DRAM you have in the server. If you ever need to xfs_repair a
> 1PB file system, you will need a very beefy box :)
> 
> Ric
> 

Hello Ric,

thank you for your answer. I am aware that there is a difference
between the maximum size under practical conditions and the theoretical
maximum. But I am looking for this theoretical number to use within
my thesis comparing file systems.


* Re: Maximum file system size of XFS?
  2013-03-09 22:39   ` Pascal
@ 2013-03-10  1:10     ` Eric Sandeen
  2013-03-10  7:54       ` Stan Hoeppner
  2013-03-11  1:55     ` Dave Chinner
  1 sibling, 1 reply; 16+ messages in thread
From: Eric Sandeen @ 2013-03-10  1:10 UTC (permalink / raw)
  To: Pascal; +Cc: xfs

On 3/9/13 4:39 PM, Pascal wrote:

> Hello Ric,
> 
> thank you for your answer. I am aware that there is a difference
> between the maximum size under practical conditions and the theoretical
> maximum. But I am looking for this theoretical number to use in within
> in my thesis comparing file systems.

A thesis comparing actual scalability would be much more interesting
than one comparing, essentially, the container size chosen for a disk
block.  One could quickly write a filesystem which "can" be as large
as a yottabyte, but it wouldn't really *mean* anything.

-Eric


* Re: Maximum file system size of XFS?
  2013-03-10  1:10     ` Eric Sandeen
@ 2013-03-10  7:54       ` Stan Hoeppner
  2013-03-11 11:02         ` Stan Hoeppner
  0 siblings, 1 reply; 16+ messages in thread
From: Stan Hoeppner @ 2013-03-10  7:54 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs, Pascal

On 3/9/2013 7:10 PM, Eric Sandeen wrote:
> On 3/9/13 4:39 PM, Pascal wrote:
> 
>> Hello Ric,
>>
>> thank you for your answer. I am aware that there is a difference
>> between the maximum size under practical conditions and the theoretical
>> maximum. But I am looking for this theoretical number to use in within
>> in my thesis comparing file systems.
> 
> A thesis comparing actual scalability would be much more interesting
> than one comparing, essentially, the container size chosen for a disk
> block.  One could quickly write a filesystem which "can" be as large
> as a yottabyte, but it wouldn't really *mean* anything.

Agreed.  But if the OP must have the theoretical maximum, I think what's
in the SGI doc is the correct number.  Below what the OP quoted from the
Features section, in the Technical Specifications, we find:

" Maximum Filesystem Size

For Linux 2.4, 2 TB. For Linux 2.6 and beyond, when using 64 bit
addressing in the block devices layer (CONFIG_LBD) and a 64 bit
platform, filesystem size limit increases to 9 million terabytes (or the
device limits). For these later kernels on 32 bit platforms, 16TB is the
current limit even with 64 bit addressing enabled in the block layer."
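
For what it's worth, the different published figures look like the same
2^63/2^64-byte limits expressed in different units.  A quick check with bc
(assuming it is installed):

echo '2^63' | bc                    # 9223372036854775808 bytes
echo 'scale=2; 2^63 / 10^18' | bc   # 9.22 -> the "9 exabytes" (decimal EB)
                                    #         and "9 million terabytes" figures
echo '2^63 / 2^60' | bc             # 8    -> the "8 exabytes" figure, i.e. 8 EiB
echo '2^64 / 2^60' | bc             # 16   -> the "16 exabytes" figure is
                                    #         presumably 2^64 bytes in EiB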


I assume the OP's paper deals with the far distant future where
individual rusty disk drives have 1PB capacity, thus requiring 'only'
9,000 disk drives for a RAW 9EB XFS without redundancy, or 18,000 drives
for RAID10.  With today's largest drives at 4TB, it would take 2.25
million disk drives for a RAW 9EB capacity, 4.5 million for RAID10.  All
of this assuming my math is correct.  I don't regularly deal with 16
digit decimal numbers. ;)  I'm also assuming in this distant future that
rusty drives still lead SSD in price/capacity.  That may be an incorrect
assumption.  Dave can beat up on me in a couple of decades if my
assumption proves incorrect. ;)
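
The raw division, checked with bc:

echo '9*10^18 / 10^15' | bc          # 9000 drives at 1PB each, no redundancy
echo '9*10^18 / (4*10^12)' | bc      # 2250000 drives at 4TB each, no redundancy
echo '2 * 9*10^18 / (4*10^12)' | bc  # 4500000 drives at 4TB for RAID10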

For a 9EB XFS to become remotely practical, I'd say disk drive capacity
would have to reach 10 petabytes per drive.  This yields 1800 drives for
9EB in RAID10, or 3x 42U racks each housing 10x 4U 60 drive FC RAID
chassis, 600 drives per rack.  I keep saying RAID10 instead of RAID6
because I don't think anyone would want to attempt a RAID6 parity
rebuild of even a small 4+2 array of 10PB drives, if the sustained
interface rate continues to increase at the snail's pace it has in
relation to areal density.  Peak sustained interface data rate today is
about 200MB/s for the fastest rusty drives.  If we are lucky the 10PB
drives of the future will have a sustained interface rate of 20GB/s, or
100x today's fastest, which would allow a mirroring operation to
complete in about 140 hours, still far slower than with today's 4TB
drives, which take about 8 hours.
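
The mirroring estimates above, spelled out with bc:

echo '10*10^15 / (20*10^9) / 3600' | bc  # ~138 hours for 10PB at 20GB/s
echo '4*10^12 / (200*10^6) / 3600' | bc  # ~5 hours for 4TB at a flat 200MB/s
                                         # (real drives slow toward the inner
                                         #  tracks, hence roughly 8 hours)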

Note that a 20GB/s one way data rate of such a 10PB drive would saturate
a 16 lane PCI Express v3.0 slot (15GB/s), and eat 2/3rds of a v4.0 x16
slot's bandwidth (31GB/s, but won't ship until ~2016).  And since
current PCIe controller to processor interconnects are limited to about
12-20GB/s one way, PCIe b/w doesn't matter.  Thus, the throughput of
our peripheral and system level interconnects must increase many fold as
well to facilitate the hardware that would enable an EB-sized XFS.

And as Ric mentioned, executing xfs_repair on a 9EB XFS would likely
require a host machine with many hundreds of times the memory capacity
of systems available today.  That, and/or a rewrite of xfs_repair to
make more efficient use of RAM.

So in summary, an Exabyte scale XFS is simply not practical today, and
won't be for at least another couple of decades, or more, if ever.  The
same holds true for some of the other filesystems you're going to be
writing about.  Some of the cluster and/or distributed filesystems
you're looking at could probably scale to Exabytes today.  That is, if
someone had the budget for half a million hard drives, host systems,
switches, etc, the facilities to house it all, and the budget for power
and cooling.  That's 834 racks for drives alone, just under 1/3rd of a
mile long if installed in a single row.

-- 
Stan


* Re: Maximum file system size of XFS?
  2013-03-09 22:39   ` Pascal
  2013-03-10  1:10     ` Eric Sandeen
@ 2013-03-11  1:55     ` Dave Chinner
  1 sibling, 0 replies; 16+ messages in thread
From: Dave Chinner @ 2013-03-11  1:55 UTC (permalink / raw)
  To: Pascal; +Cc: xfs

On Sat, Mar 09, 2013 at 11:39:40PM +0100, Pascal wrote:
> thank you for your answer. I am aware that there is a difference
> between the maximum size under practical conditions and the theoretical
> maximum. But I am looking for this theoretical number to use in within
> in my thesis comparing file systems.

Out of curiosity, what aspect of file systems are you comparing?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Maximum file system size of XFS?
  2013-03-10  7:54       ` Stan Hoeppner
@ 2013-03-11 11:02         ` Stan Hoeppner
  2013-03-11 16:15           ` Hans-Peter Jansen
  0 siblings, 1 reply; 16+ messages in thread
From: Stan Hoeppner @ 2013-03-11 11:02 UTC (permalink / raw)
  To: stan; +Cc: Pascal, Eric Sandeen, xfs

On 3/10/2013 1:54 AM, Stan Hoeppner wrote:

> So in summary, an Exabyte scale XFS is simply not practical today, and
> won't be for at least another couple of decades, or more, if ever.  The
> same holds true for some of the other filesystems you're going to be
> writing about.  Some of the cluster and/or distributed filesystems
> you're looking at could probably scale to Exabytes today.  That is, if
> someone had the budget for half a million hard drives, host systems,
> switches, etc, the facilities to house it all, and the budget for power
> and cooling.  That's 834 racks for drives alone, just under 1/3rd of a
> mile long if installed in a single row.

Jet lag due to time travel caused a math error above.  With today's 4TB
drives it would require 2.25 million units for a raw 9EB capacity.
That's 3,750 racks of 600 drives each.  These would stretch 1.42 miles,
7500 ft.

-- 
Stan


* Re: Maximum file system size of XFS?
  2013-03-11 11:02         ` Stan Hoeppner
@ 2013-03-11 16:15           ` Hans-Peter Jansen
  2013-03-11 16:22             ` Emmanuel Florac
  0 siblings, 1 reply; 16+ messages in thread
From: Hans-Peter Jansen @ 2013-03-11 16:15 UTC (permalink / raw)
  To: xfs, stan; +Cc: Eric Sandeen, Pascal

On Monday, 11 March 2013, 06:02:26, Stan Hoeppner wrote:
> On 3/10/2013 1:54 AM, Stan Hoeppner wrote:
> > So in summary, an Exabyte scale XFS is simply not practical today, and
> > won't be for at least another couple of decades, or more, if ever.  The
> > same holds true for some of the other filesystems you're going to be
> > writing about.  Some of the cluster and/or distributed filesystems
> > you're looking at could probably scale to Exabytes today.  That is, if
> > someone had the budget for half a million hard drives, host systems,
> > switches, etc, the facilities to house it all, and the budget for power
> > and cooling.  That's 834 racks for drives alone, just under 1/3rd of a
> > mile long if installed in a single row.
> 
> Jet lag due to time travel caused a math error above.  With today's 4TB
> drives it would require 2.25 million units for a raw 9EB capacity.
> That's 3,750 racks of 600 drives each.  These would stretch 1.42 miles,
> 7500 ft.

And I just signed off on the building plans for our new datacenter, based on
your earlier calculations. The question is, who carries the cost of the four
extra floors that building now needs...

Are you well-insured, Stan?

Cheers,
Pete


* Re: Maximum file system size of XFS?
  2013-03-11 16:15           ` Hans-Peter Jansen
@ 2013-03-11 16:22             ` Emmanuel Florac
  0 siblings, 0 replies; 16+ messages in thread
From: Emmanuel Florac @ 2013-03-11 16:22 UTC (permalink / raw)
  To: Hans-Peter Jansen; +Cc: Pascal, Eric Sandeen, stan, xfs

On Mon, 11 Mar 2013 17:15:08 +0100,
Hans-Peter Jansen <hpj@urpla.net> wrote:

> And I just acknowledged the building plans for our new datacenter,
> based on your former calculations.

Don't be afraid, there are 80-drive 4U chassis available, and 5TB
drives are around the corner. That's 800 drives and a raw capacity of
4 PB per 42U rack.
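
In round numbers (bc):

echo '80 * 10 * 5' | bc          # 4000 TB = 4 PB raw per 42U rack
echo '9*10^18 / (4*10^15)' | bc  # 2250 such racks for a raw 9EB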

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: Maximum file system size of XFS?
  2013-03-09 20:51 Maximum file system size of XFS? Pascal
  2013-03-09 22:29 ` Ric Wheeler
@ 2013-03-11 21:45 ` Martin Steigerwald
  2013-03-11 21:57   ` Martin Steigerwald
  2013-03-11 22:10   ` Martin Steigerwald
  1 sibling, 2 replies; 16+ messages in thread
From: Martin Steigerwald @ 2013-03-11 21:45 UTC (permalink / raw)
  To: xfs; +Cc: Pascal

On Saturday, 9 March 2013, Pascal wrote:
> Hello,

Hi Pascal,

> I am asking you because I am insecure about the correct answer and
> different sources give me different numbers.
> 
> 
> My question is: What is the maximum file system size of XFS?
> 
> The official page says: 2^63 = 9 x 10^18 = 9 exabytes
> Source: http://oss.sgi.com/projects/xfs/
> 
> Wikipedia says 16 exabytes.
> Source: https://en.wikipedia.org/wiki/XFS
> 
> Another reference books says 8 exabytes (2^63).
> 
> 
> Can anyone tell me and explain what is the maximum file system size for
> XFS?

You can test it, at least the theoretical limit. Whether such a filesystem
will work nicely with a real workload is, as pointed out, a different question.

1) Use a big enough host XFS filesystem (yes, it has to be XFS, for want of
anything else that can carry an exabyte-sized sparse file)

merkaba:~> LANG=C mkfs.xfs -L justcrazy /dev/merkaba/zeit
meta-data=/dev/merkaba/zeit      isize=256    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


2) Create an insanely big sparse file

merkaba:~> truncate -s1E /mnt/zeit/evenmorecrazy.img
merkaba:~> ls -lh /mnt/zeit/evenmorecrazy.img
-rw-r--r-- 1 root root 1,0E Mär 11 22:37 /mnt/zeit/evenmorecrazy.img

(No, this won't work with Ext4.)


3) Make an XFS file system on it:

merkaba:~> mkfs.xfs /mnt/zeit/evenmorecrazy.img

I won't do it today. I tried it for a gag during a Linux performance and
analysis training I held, on a ThinkPad T520 with a Sandy Bridge i5 at
2.50 GHz and an Intel SSD 320, on an XFS filesystem of about 20 GiB.

The mkfs command ran for something like one or two hours. It was using quite
some CPU and quite some SSD bandwidth, but did not max out either.

The host XFS filesystem was almost full, so the image took up just about
those 20 GiB.


4) Mount it and enjoy the output of df -hT.
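
Something like this should do it (the mount point is just an example, and
the loop device is set up implicitly when mounting an image file):

mkdir -p /mnt/crazy
mount -o loop /mnt/zeit/evenmorecrazy.img /mnt/crazy
df -hT /mnt/crazy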


5) Write to it if you dare. I did it, until the Linux kernel reported
something about "lost buffer writes". What I found strange is that the dd
writing to the 1E filesystem did not quit with an input/output error at that
point. It just kept running.


I didn't test this with any larger size, but if space and time usage scale
linearly, it might be possible to create a 10EiB filesystem within a 200 GiB
host XFS after about a day of waiting :).

No, I do not suggest using anything even remotely like this in
production.

And no, my test didn't show that a 1EiB filesystem will work nicely with
any real-life workload.

Am I crazy for trying this? I might be :)

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: Maximum file system size of XFS?
  2013-03-11 21:45 ` Martin Steigerwald
@ 2013-03-11 21:57   ` Martin Steigerwald
  2013-03-11 22:01     ` Martin Steigerwald
  2013-03-11 22:19     ` Dave Chinner
  2013-03-11 22:10   ` Martin Steigerwald
  1 sibling, 2 replies; 16+ messages in thread
From: Martin Steigerwald @ 2013-03-11 21:57 UTC (permalink / raw)
  To: xfs; +Cc: Pascal

On Monday, 11 March 2013, Martin Steigerwald wrote:
> 2) Create a insanely big sparse file
> 
> merkaba:~> truncate -s1E /mnt/zeit/evenmorecrazy.img
> merkaba:~> ls -lh /mnt/zeit/evenmorecrazy.img
> -rw-r--r-- 1 root root 1,0E Mär 11 22:37 /mnt/zeit/evenmorecrazy.img
> 
> (No, this won´t work with Ext4.)

Okay, you can't go beyond 8 EiB for a single file, which is about what I
have read somewhere:

merkaba:/mnt/zeit> ls -lh
insgesamt 0
-rw-r--r-- 1 root root 1,0E Mär 11 22:37 evenmorecrazy.img
merkaba:/mnt/zeit> truncate -s2E /mnt/zeit/evenmorecrazy.img
merkaba:/mnt/zeit> truncate -s3E /mnt/zeit/evenmorecrazy.img
merkaba:/mnt/zeit> truncate -s4E /mnt/zeit/evenmorecrazy.img
merkaba:/mnt/zeit> truncate -s5E /mnt/zeit/evenmorecrazy.img
merkaba:/mnt/zeit> truncate -s6E /mnt/zeit/evenmorecrazy.img
merkaba:/mnt/zeit> truncate -s7E /mnt/zeit/evenmorecrazy.img
merkaba:/mnt/zeit> LANG=C truncate -s8E /mnt/zeit/evenmorecrazy.img
truncate: invalid number '8E': Value too large for defined data type
merkaba:/mnt/zeit#1> ls -lh  
insgesamt 0
-rw-r--r-- 1 root root 7,0E Mär 11 22:49 evenmorecrazy.img

So the test stops there, unless you concatenate two of those files with LVM
or software RAID 0 (if that works). Like this (I just had to try it):


merkaba:/mnt/zeit> ls -lh                                           
insgesamt 0
-rw-r--r-- 1 root root 7,0E Mär 11 22:49 evenmorecrazy.img
-rw-r--r-- 1 root root 7,0E Mär 11 22:52 evenmorecrazy.img2
merkaba:/mnt/zeit> losetup /dev/loop0 evenmorecrazy.img
merkaba:/mnt/zeit> losetup /dev/loop1 evenmorecrazy.img2

merkaba:/mnt/zeit#5> pvcreate /dev/loop0       
  Physical volume "/dev/loop0" successfully created
merkaba:/mnt/zeit> pvcreate /dev/loop1
  Physical volume "/dev/loop1" successfully created
merkaba:/mnt/zeit> vgcreate justinsane /dev/loop0 /dev/loop1
  PV /dev/loop0 too large for extent size 4,00 MiB.
  Format-specific setup of physical volume '/dev/loop0' failed.
  Unable to add physical volume '/dev/loop0' to volume group 'justinsane'.


merkaba:/mnt/zeit#5> vgcreate --physicalextentsize 16M  justinsane 
/dev/loop0 /dev/loop1
  PV /dev/loop0 too large for extent size 16,00 MiB.
  Format-specific setup of physical volume '/dev/loop0' failed.
  Unable to add physical volume '/dev/loop0' to volume group 'justinsane'.

merkaba:/mnt/zeit#5> vgcreate --physicalextentsize 128M  justinsane 
/dev/loop0 /dev/loop1
  PV /dev/loop0 too large for extent size 128,00 MiB.
  Format-specific setup of physical volume '/dev/loop0' failed.
  Unable to add physical volume '/dev/loop0' to volume group 'justinsane'.

merkaba:/mnt/zeit#5> vgcreate --physicalextentsize 1G  justinsane /dev/loop0 
/dev/loop1
  PV /dev/loop0 too large for extent size 1,00 GiB.
  Format-specific setup of physical volume '/dev/loop0' failed.
  Unable to add physical volume '/dev/loop0' to volume group 'justinsane'.

merkaba:/mnt/zeit#5> vgcreate --physicalextentsize 4G  justinsane /dev/loop0 
/dev/loop1
  Volume group "justinsane" successfully created

merkaba:/mnt/zeit> vgs                                                               
  VG         #PV #LV #SN Attr   VSize   VFree 
  justinsane   2   0   0 wz--n-  14,00e 14,00e
  merkaba      1   4   0 wz--n- 278,99g  4,85g
merkaba:/mnt/zeit>

merkaba:/mnt/zeit> vgdisplay justinsane
  --- Volume group ---
  VG Name               justinsane
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               14,00 EiB
  PE Size               4,00 GiB
  Total PE              3758096382
  Alloc PE / Size       0 / 0   
  Free  PE / Size       3758096382 / 14,00 EiB
  VG UUID               z8JP5s-lfRw-uKo8-DXAP-XWGe-aKra-xug9Nn
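
The extent-size failures above make sense if LVM can address only about 2^32
extents per PV (an assumption on my part, but the numbers fit):

echo '2^32' | bc               # 4294967296 -- presumed per-PV extent limit
echo '7*2^60 / (1*2^30)' | bc  # 7516192768 extents with 1 GiB extents: too many
echo '7*2^60 / (4*2^30)' | bc  # 1879048192 extents with 4 GiB extents: fits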


Enough insanity for today :)

I won't mkfs.xfs on it, the 20 GiB of the host filesystem wouldn't be
enough.

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: Maximum file system size of XFS?
  2013-03-11 21:57   ` Martin Steigerwald
@ 2013-03-11 22:01     ` Martin Steigerwald
  2013-03-11 22:04       ` Martin Steigerwald
  2013-03-11 22:19     ` Dave Chinner
  1 sibling, 1 reply; 16+ messages in thread
From: Martin Steigerwald @ 2013-03-11 22:01 UTC (permalink / raw)
  To: xfs; +Cc: Pascal

On Monday, 11 March 2013, Martin Steigerwald wrote:
> On Monday, 11 March 2013, Martin Steigerwald wrote:
> > 2) Create a insanely big sparse file
> > 
> > merkaba:~> truncate -s1E /mnt/zeit/evenmorecrazy.img
> > merkaba:~> ls -lh /mnt/zeit/evenmorecrazy.img
> > -rw-r--r-- 1 root root 1,0E Mär 11 22:37 /mnt/zeit/evenmorecrazy.img
> > 
> > (No, this won´t work with Ext4.)
> 
> Okay, you can´t go beyond 8 EiB for a single file which is about what I
> have read somewhere:
[…]
> merkaba:/mnt/zeit> truncate -s7E /mnt/zeit/evenmorecrazy.img
> merkaba:/mnt/zeit> LANG=C truncate -s8E /mnt/zeit/evenmorecrazy.img
> truncate: invalid number '8E': Value too large for defined data type
> merkaba:/mnt/zeit#1> ls -lh
> insgesamt 0
> -rw-r--r-- 1 root root 7,0E Mär 11 22:49 evenmorecrazy.img
> 
> So so tests stops there, until you concatenate two of those files with
> LVM or SoftRAID 0 (if that works). Like this (I just had to try it):
> 
> 
> merkaba:/mnt/zeit> ls -lh
> insgesamt 0
> -rw-r--r-- 1 root root 7,0E Mär 11 22:49 evenmorecrazy.img
> -rw-r--r-- 1 root root 7,0E Mär 11 22:52 evenmorecrazy.img2
> merkaba:/mnt/zeit> losetup /dev/loop0 evenmorecrazy.img
> merkaba:/mnt/zeit> losetup /dev/loop1 evenmorecrazy.img2
> 
> merkaba:/mnt/zeit#5> pvcreate /dev/loop0
>   Physical volume "/dev/loop0" successfully created
> merkaba:/mnt/zeit> pvcreate /dev/loop1
>   Physical volume "/dev/loop1" successfully created
> merkaba:/mnt/zeit> vgcreate justinsane /dev/loop0 /dev/loop1
>   PV /dev/loop0 too large for extent size 4,00 MiB.
>   Format-specific setup of physical volume '/dev/loop0' failed.
>   Unable to add physical volume '/dev/loop0' to volume group
> 'justinsane'.
[…]
> merkaba:/mnt/zeit#5> vgcreate --physicalextentsize 4G  justinsane
> /dev/loop0 /dev/loop1
>   Volume group "justinsane" successfully created
> 
> merkaba:/mnt/zeit> vgs
>   VG         #PV #LV #SN Attr   VSize   VFree
>   justinsane   2   0   0 wz--n-  14,00e 14,00e
>   merkaba      1   4   0 wz--n- 278,99g  4,85g
> merkaba:/mnt/zeit>
> 
> merkaba:/mnt/zeit> vgdisplay justinsane
[…]
> Enough insanity for today :)

Not quite:

> I won´t mkfs.xfs on it, the 20 GiB of the just filesystem wouldn´t be
> enough.

Ok, there seems to be another limit involved:

merkaba:/mnt/zeit> lvcreate -n yourbiggiexfs -L14E justinsane
  Volume group "justinsane" has insufficient free space (3758096382 
extents): 3758096384 required.
merkaba:/mnt/zeit#5> LANG=C lvcreate -n yourbiggiexfs -L14E justinsane
  Volume group "justinsane" has insufficient free space (3758096382 
extents): 3758096384 required.
merkaba:/mnt/zeit#5> LANG=C lvcreate -n yourbiggiexfs -L13E justinsane
  /dev/justinsane/yourbiggiexfs: lseek 0 failed: Invalid argument
  /dev/justinsane/yourbiggiexfs: lseek 0 failed: Invalid argument
  Logical volume "yourbiggiexfs" created
merkaba:/mnt/zeit> LANG=C lvs                                       
  /dev/justinsane/yourbiggiexfs: lseek 14987979559888945152 failed: Invalid 
argument
  /dev/justinsane/yourbiggiexfs: lseek 14987979559889002496 failed: Invalid 
argument
  /dev/justinsane/yourbiggiexfs: lseek 0 failed: Invalid argument
  /dev/justinsane/yourbiggiexfs: lseek 4096 failed: Invalid argument
  /dev/justinsane/yourbiggiexfs: lseek 0 failed: Invalid argument
  LV            VG         Attr      LSize   Pool Origin Data%  Move Log 
Copy%  Convert
  yourbiggiexfs justinsane -wi-a----  13.00e 

So testing with 9 EiB might become a bit of an issue :)
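
For what it's worth, the numbers line up: a 14E LV needs exactly two more
extents than the VG has free, presumably because one extent per PV is lost to
LVM's own metadata area.  And the lseek failures are the familiar signed
64-bit offset limit again: the failing offsets (about 1.5 x 10^19) lie past
2^63 (about 9.2 x 10^18).

echo '14*2^60 / (4*2^30)' | bc           # 3758096384 extents for exactly 14E
echo '2 * (7*2^60 / (4*2^30) - 1)' | bc  # 3758096382 extents actually free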

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: Maximum file system size of XFS?
  2013-03-11 22:01     ` Martin Steigerwald
@ 2013-03-11 22:04       ` Martin Steigerwald
  2013-03-20 18:26         ` Pascal
  0 siblings, 1 reply; 16+ messages in thread
From: Martin Steigerwald @ 2013-03-11 22:04 UTC (permalink / raw)
  To: xfs

On Monday, 11 March 2013, Martin Steigerwald wrote:
> On Monday, 11 March 2013, Martin Steigerwald wrote:
> > On Monday, 11 March 2013, Martin Steigerwald wrote:
> > > 2) Create a insanely big sparse file
> > > 
> > > merkaba:~> truncate -s1E /mnt/zeit/evenmorecrazy.img
> > > merkaba:~> ls -lh /mnt/zeit/evenmorecrazy.img
> > > -rw-r--r-- 1 root root 1,0E Mär 11 22:37 /mnt/zeit/evenmorecrazy.img
> > > 
> > > (No, this won´t work with Ext4.)
> > 
> > Okay, you can´t go beyond 8 EiB for a single file which is about what I
> > have read somewhere:
> […]
> 
> > merkaba:/mnt/zeit> truncate -s7E /mnt/zeit/evenmorecrazy.img
> > merkaba:/mnt/zeit> LANG=C truncate -s8E /mnt/zeit/evenmorecrazy.img
> > truncate: invalid number '8E': Value too large for defined data type
> > merkaba:/mnt/zeit#1> ls -lh
> > insgesamt 0
> > -rw-r--r-- 1 root root 7,0E Mär 11 22:49 evenmorecrazy.img
> > 
> > So so tests stops there, until you concatenate two of those files with
> > LVM or SoftRAID 0 (if that works). Like this (I just had to try it):
> > 
> > 
> > merkaba:/mnt/zeit> ls -lh
> > insgesamt 0
> > -rw-r--r-- 1 root root 7,0E Mär 11 22:49 evenmorecrazy.img
> > -rw-r--r-- 1 root root 7,0E Mär 11 22:52 evenmorecrazy.img2
> > merkaba:/mnt/zeit> losetup /dev/loop0 evenmorecrazy.img
> > merkaba:/mnt/zeit> losetup /dev/loop1 evenmorecrazy.img2
> > 
> > merkaba:/mnt/zeit#5> pvcreate /dev/loop0
> > 
> >   Physical volume "/dev/loop0" successfully created
> > 
> > merkaba:/mnt/zeit> pvcreate /dev/loop1
> > 
> >   Physical volume "/dev/loop1" successfully created
> > 
> > merkaba:/mnt/zeit> vgcreate justinsane /dev/loop0 /dev/loop1
> > 
> >   PV /dev/loop0 too large for extent size 4,00 MiB.
> >   Format-specific setup of physical volume '/dev/loop0' failed.
> >   Unable to add physical volume '/dev/loop0' to volume group
> > 
> > 'justinsane'.
> 
> […]
> 
> > merkaba:/mnt/zeit#5> vgcreate --physicalextentsize 4G  justinsane
> > /dev/loop0 /dev/loop1
> > 
> >   Volume group "justinsane" successfully created
> > 
> > merkaba:/mnt/zeit> vgs
> > 
> >   VG         #PV #LV #SN Attr   VSize   VFree
> >   justinsane   2   0   0 wz--n-  14,00e 14,00e
> >   merkaba      1   4   0 wz--n- 278,99g  4,85g
> > 
> > merkaba:/mnt/zeit>
> > 
> > merkaba:/mnt/zeit> vgdisplay justinsane
> 
> […]
> 
> > Enough insanity for today :)
> 
> Not quite:
> > I won´t mkfs.xfs on it, the 20 GiB of the just filesystem wouldn´t be
> > enough.
> 
> Ok, there seems to be another limit involved:
> 
> merkaba:/mnt/zeit> lvcreate -n yourbiggiexfs -L14E justinsane
>   Volume group "justinsane" has insufficient free space (3758096382
> extents): 3758096384 required.
[…]
> merkaba:/mnt/zeit#5> LANG=C lvcreate -n yourbiggiexfs -L13E justinsane
>   /dev/justinsane/yourbiggiexfs: lseek 0 failed: Invalid argument
>   /dev/justinsane/yourbiggiexfs: lseek 0 failed: Invalid argument
>   Logical volume "yourbiggiexfs" created
> merkaba:/mnt/zeit> LANG=C lvs
>   /dev/justinsane/yourbiggiexfs: lseek 14987979559888945152 failed:
> Invalid argument
>   /dev/justinsane/yourbiggiexfs: lseek 14987979559889002496 failed:
> Invalid argument

Well,

merkaba:/mnt/zeit> LANG=C ls -l
total 24
-rw-r--r-- 1 root root 8070450532247928832 Mar 11 23:02 evenmorecrazy.img
-rw-r--r-- 1 root root 8070450532247928832 Mar 11 23:02 evenmorecrazy.img2

looks crazy enough for me already.

(Ok, I am really going to stop now :)

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: Maximum file system size of XFS?
  2013-03-11 21:45 ` Martin Steigerwald
  2013-03-11 21:57   ` Martin Steigerwald
@ 2013-03-11 22:10   ` Martin Steigerwald
  1 sibling, 0 replies; 16+ messages in thread
From: Martin Steigerwald @ 2013-03-11 22:10 UTC (permalink / raw)
  To: xfs; +Cc: Pascal

On Monday, 11 March 2013, Martin Steigerwald wrote:
> On Saturday, 9 March 2013, Pascal wrote:
> > Hello,
> 
> Hi Pascal,
> 
> > I am asking you because I am insecure about the correct answer and
> > different sources give me different numbers.
> >
> > My question is: What is the maximum file system size of XFS?
> >
> > The official page says: 2^63 = 9 x 10^18 = 9 exabytes
> > Source: http://oss.sgi.com/projects/xfs/
> >
> > Wikipedia says 16 exabytes.
> > Source: https://en.wikipedia.org/wiki/XFS
> >
> > Another reference books says 8 exabytes (2^63).
> >
> > Can anyone tell me and explain what is the maximum file system size for
> > XFS?
> 
> You can test it. The theoretical limit. Whether such a filesystem will
> work  nicely with a real workload is, as pointed out, a different
> question.

Well, as I just demonstrated, you can't. At least not with XFS within XFS.
You can only test up to the maximum file size. If XFS can be larger than that,
you need a filesystem which can carry an even larger file. Maybe ZFS on Linux
or the like?

BTRFS doesn't go beyond 8 EiB per file either:

merkaba:/mnt#1> mkfs.btrfs -n 16384 -l 16384 /dev/merkaba/zeit

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/merkaba/zeit
        nodesize 16384 leafsize 16384 sectorsize 4096 size 20.00GB
Btrfs Btrfs v0.19
merkaba:/mnt> mount /dev/merkaba/zeit zeit
merkaba:/mnt> cd zeit
merkaba:/mnt/zeit> truncate -s1E canitgetcrazierthanthat.img
merkaba:/mnt/zeit> truncate -s2E canitgetcrazierthanthat.img
merkaba:/mnt/zeit> truncate -s3E canitgetcrazierthanthat.img
merkaba:/mnt/zeit> truncate -s4E canitgetcrazierthanthat.img
merkaba:/mnt/zeit> truncate -s5E canitgetcrazierthanthat.img
merkaba:/mnt/zeit> truncate -s6E canitgetcrazierthanthat.img
merkaba:/mnt/zeit> truncate -s7E canitgetcrazierthanthat.img
merkaba:/mnt/zeit> truncate -s8E canitgetcrazierthanthat.img
truncate: ungültige Zahl „8E“: Der Wert ist zu groß für den definierten 
Datentyp
merkaba:/mnt/zeit#1> LANG=C truncate -s8E canitgetcrazierthanthat.im
truncate: invalid number '8E': Value too large for defined data type

merkaba:/mnt/zeit#1> ls -lh
insgesamt 0
-rw-r--r-- 1 root root 7,0E Mär 11 23:09 canitgetcrazierthanthat.img

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: Maximum file system size of XFS?
  2013-03-11 21:57   ` Martin Steigerwald
  2013-03-11 22:01     ` Martin Steigerwald
@ 2013-03-11 22:19     ` Dave Chinner
  1 sibling, 0 replies; 16+ messages in thread
From: Dave Chinner @ 2013-03-11 22:19 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: Pascal, xfs

On Mon, Mar 11, 2013 at 10:57:00PM +0100, Martin Steigerwald wrote:
> On Monday, 11 March 2013, Martin Steigerwald wrote:
> > 2) Create a insanely big sparse file
> > 
> > merkaba:~> truncate -s1E /mnt/zeit/evenmorecrazy.img
> > merkaba:~> ls -lh /mnt/zeit/evenmorecrazy.img
> > -rw-r--r-- 1 root root 1,0E Mär 11 22:37 /mnt/zeit/evenmorecrazy.img
> > 
> > (No, this won´t work with Ext4.)
> 
> Okay, you can´t go beyond 8 EiB for a single file which is about what I have 
> read somewhere:

Right - file size offsets max out at 2^63 bytes.
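
Coreutils' E suffix means 2^60, so truncate -s8E asks for exactly 2^63 bytes,
one byte past the largest representable offset, which is presumably why 7E
worked and 8E was rejected:

echo '8*2^60 - 2^63' | bc  # 0 -- 8E is exactly 2^63
echo '2^63 - 1' | bc       # 9223372036854775807 -- largest valid file offset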

....

> merkaba:/mnt/zeit#5> vgcreate --physicalextentsize 4G  justinsane /dev/loop0 
> /dev/loop1
>   Volume group "justinsane" successfully created
> 
> merkaba:/mnt/zeit> vgs                                                               
>   VG         #PV #LV #SN Attr   VSize   VFree 
>   justinsane   2   0   0 wz--n-  14,00e 14,00e
>   merkaba      1   4   0 wz--n- 278,99g  4,85g
> merkaba:/mnt/zeit>
....
> Enough insanity for today :)
> 
> I won´t mkfs.xfs on it, the 20 GiB of the just filesystem wouldn´t be 
> enough.

Right - I did a mkfs.xfs on a (8EB - 1GB) file a couple of days ago
just to check it worked. I killed it after a short while, because I
didn't feel like needlessly subjecting the SSDs the file was
physically located on to the 25 million sparse sector sized writes
needed for mkfs to complete.

And you can double that number of writes needed for a 16EB
filesystem to be initialised by mkfs. So, theory be damned, even
mkfs.xfs doesn't scale to supporting exabyte filesystems...
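
For a rough sense of scale, my back-of-the-envelope (assuming the usual 1TiB
cap on allocation group size): an 8EB filesystem has on the order of eight
million AGs, and each AG gets a handful of sector-sized metadata writes at
mkfs time, which is how you end up in the tens of millions of writes.

echo '8*2^60 / 2^40' | bc  # 8388608 AGs at the 1TiB maximum AG size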

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Maximum file system size of XFS?
  2013-03-11 22:04       ` Martin Steigerwald
@ 2013-03-20 18:26         ` Pascal
  0 siblings, 0 replies; 16+ messages in thread
From: Pascal @ 2013-03-20 18:26 UTC (permalink / raw)
  To: linux-xfs

On Mon, 11 Mar 2013 23:04:14 +0100,
Martin Steigerwald <Martin@lichtvoll.de> wrote:

> > > Enough insanity for today :)

Hey Martin,

I am very impressed by the time and effort you put into this! Thank you!


And thanks to everyone for answering my question!




Thread overview: 16+ messages
2013-03-09 20:51 Maximum file system size of XFS? Pascal
2013-03-09 22:29 ` Ric Wheeler
2013-03-09 22:39   ` Pascal
2013-03-10  1:10     ` Eric Sandeen
2013-03-10  7:54       ` Stan Hoeppner
2013-03-11 11:02         ` Stan Hoeppner
2013-03-11 16:15           ` Hans-Peter Jansen
2013-03-11 16:22             ` Emmanuel Florac
2013-03-11  1:55     ` Dave Chinner
2013-03-11 21:45 ` Martin Steigerwald
2013-03-11 21:57   ` Martin Steigerwald
2013-03-11 22:01     ` Martin Steigerwald
2013-03-11 22:04       ` Martin Steigerwald
2013-03-20 18:26         ` Pascal
2013-03-11 22:19     ` Dave Chinner
2013-03-11 22:10   ` Martin Steigerwald
