* partition 100% full No space left on device. looks like xfs is corrupted or a bug
@ 2016-07-29  9:01 Lista Unx
  2016-07-29 10:48 ` Carlos E. R.
                   ` (3 more replies)
  0 siblings, 4 replies; 26+ messages in thread
From: Lista Unx @ 2016-07-29  9:01 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 7643 bytes --]

Hello xfs experts,

I have been crawling in the dark for a few days and I have no idea how to fix the following problem. On a CentOS 7 system:

# uname -a
Linux 1a 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

df is reporting / as 100% full, while du is reporting only 1.7G used out of the 50GB available (less than 4%). I want to mention that / is XFS. See below:

# df -a|grep ^/
/dev/mapper/centos-root  52403200 52400396      2804 100% /
                                     ^^^^^^^^^^   ^^^^^^^^^^
/dev/sda1                  503040   131876    371164  27% /boot
/dev/mapper/centos-home 210529792    35204 210494588   1% /home

du is estimating just 1.7G usage of /
# du -sch /* --exclude=home --exclude=boot
0       /bin
0       /dev
25M     /etc
0       /lib
0       /lib64
744K    /luarocks-2.3.0
0       /media
0       /mnt
125M    /openresty-1.9.7.4
0       /opt
420K    /root
49M     /run
0       /sbin
0       /srv
0       /sys
0       /tmp
1.3G    /usr
227M    /var
1.7G    total
[root@localhost ~]#

df is also reporting ~85% inode usage:

# df -i
Filesystem                 Inodes IUsed     IFree IUse% Mounted on
/dev/mapper/centos-root     78160 66218     11942   85% /
                                       ^^^^^^^^
devtmpfs                  8218272   519   8217753    1% /dev
tmpfs                     8221010     1   8221009    1% /dev/shm
tmpfs                     8221010   648   8220362    1% /run
tmpfs                     8221010    13   8220997    1% /sys/fs/cgroup
/dev/sda1                  509952   330    509622    1% /boot
/dev/mapper/centos-home 210632704    99 210632605    1% /home
tmpfs                     8221010     1   8221009    1% /run/user/0
#

The / partition is created on top of an LVM logical volume, also 50GB in size.

# lvdisplay /dev/centos/root
  --- Logical volume ---
  LV Path                /dev/centos/root
  LV Name                root
  VG Name                centos

  LV Status              available
  # open                 1
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

I've already checked for rootkits without finding anything wrong!

I have another system, identical to this one, which is healthy. The only difference I found between these systems is the maximum number of inodes available on / (which has the same size, 50GB, on both servers). On the second (healthy) one, the maximum number of inodes is ~52 million, and not just the ~85,000 reported on the sick server.

# df -i|grep ^/
/dev/mapper/centos-root  52424704 66137  52358567    1% /
                                   ^^^^^^^^^^^^^
/dev/sda1                                  509952   330    509622    1% /boot
/dev/mapper/centos-home 210632704    26 210632678    1% /home
[root@localhost ~]#

I also suspected a large number of files on /. I counted the total number of files and both servers are the same: ~180K. So no difference here.

I also looked for files larger than 100M on both servers and found just one (104M in size):

find / -type f -size +100000k -exec ls -lh {} \;
#
/usr/lib/locale/locale-archive
#

Looking for files larger than 10M, I found just ~20 on both servers.

# find / -type f -size +10000k -exec ls -lh {} \; |wc -l
16
#

So for sure, there are NO files exhausting free space.

On both servers the number of used inodes is identical: ~66K. The xfs_info report is also identical for both. What differs is the number of AVAILABLE inodes: 85K (on the sick node) vs 52 million (on the healthy node)!!! How is that possible?! Both servers have the same size (50GB) for /!

#lsof -nP |grep -i delete|wc -l
0
#find /proc/*/fd -ls | grep -i dele|wc -l
0

So lsof and find do not report anything wrong (no files deleted but still held open)!

Rebooting does not fix the problem; / remains 100% full.

After reboot, on 25th July:

# df -ah|grep centos-root
/dev/mapper/centos-root   50G   50G  4.0M 100% /
#

Also max number of inodes = 67k:
# df -i
Filesystem                 Inodes IUsed     IFree IUse% Mounted on
/dev/mapper/centos-root     66960 66165       795   99% /
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
devtmpfs                  8218272   519   8217753    1% /dev
tmpfs                     8221010     1   8221009    1% /dev/shm
tmpfs                     8221010   630   8220380    1% /run
tmpfs                     8221010    13   8220997    1% /sys/fs/cgroup
/dev/sda1                  509952   330    509622    1% /boot
/dev/mapper/centos-home 210632704    28 210632676    1% /home
tmpfs                     8221010     1   8221009    1% /run/user/0
#

Let's intentionally run xfs_growfs (which normally should not produce any change):

# xfs_growfs /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=256    agcount=16, agsize=819136 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=13106176, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 13106176 to 13107200
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#

The partition remains the same 50GB size:
[root@nl-hvs-ov001a ~]# df -ah|grep centos-root
/dev/mapper/centos-root   50G   50G  4.0M 100% /

But the number of inodes INCREASED by more than 20%!!!
# df -i
Filesystem                 Inodes IUsed     IFree IUse% Mounted on
/dev/mapper/centos-root     83200 66165     17035   80% /
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
devtmpfs                  8218272   519   8217753    1% /dev
tmpfs                     8221010     1   8221009    1% /dev/shm
tmpfs                     8221010   630   8220380    1% /run
tmpfs                     8221010    13   8220997    1% /sys/fs/cgroup
/dev/sda1                  509952   330    509622    1% /boot
/dev/mapper/centos-home 210632704    28 210632676    1% /home
tmpfs                     8221010     1   8221009    1% /run/user/0
#

On 27 July, without changing anything there, the maximum number of inodes available for / decreased back to ~67K (the same value as 2 days earlier, before xfs_growfs)!

# df -i
Filesystem                 Inodes IUsed     IFree IUse% Mounted on
/dev/mapper/centos-root     67024 66225       799   99% /
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
devtmpfs                  8218272   519   8217753    1% /dev
tmpfs                     8221010     1   8221009    1% /dev/shm
tmpfs                     8221010   632   8220378    1% /run
tmpfs                     8221010    13   8220997    1% /sys/fs/cgroup
/dev/mapper/centos-home 210632704    99 210632605    1% /home
/dev/sda1                  509952   330    509622    1% /boot
tmpfs                     8221010     1   8221009    1% /run/user/0
#

Please note that during all this time the number of files remained unchanged at ~180K, and likewise the number of used inodes stayed constant at ~66K. Only the maximum number of available inodes decreased, which is abnormal behavior.

How can this be fixed? It looks like the XFS is corrupted, or like a bug.

Thanks in advance for help.
Alex

[-- Attachment #1.2: Type: text/html, Size: 18138 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs is corrupted or a bug
  2016-07-29  9:01 partition 100% full No space left on device. looks like xfs is corrupted or a bug Lista Unx
@ 2016-07-29 10:48 ` Carlos E. R.
  2016-07-29 14:27   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
  2016-07-29 14:03 ` partition 100% full No space left on device. looks like xfs is corrupted " Brian Foster
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 26+ messages in thread
From: Carlos E. R. @ 2016-07-29 10:48 UTC (permalink / raw)
  To: XFS mailing list

[-- Attachment #1: Type: text/plain, Size: 756 bytes --]

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256



On 2016-07-29 at 12:01 +0300, Lista Unx wrote:

>Hello xfs experts,
> 
>I have been crawling in the dark for a few days and I have no idea how to fix the following problem. On a CentOS 7 system:

I'm not an expert, far from it, but... may I suggest you add the output 
of a plain "mount" command? To show the partitions. Or perhaps:

lsblk --output NAME,TYPE,FSTYPE,SIZE,TYPE,MOUNTPOINT


- -- 
Cheers
        Carlos E. R.

        (from 13.1 x86_64 "Bottle" (Minas Tirith))
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iF4EAREIAAYFAlebNGQACgkQja8UbcUWM1y9WgEAhICHy+Td+nf5SFkIXshTL0hi
9KQinIbllstvICOAPhUBAIcP1gcc+LmfO6b4f4gsgUH6L3dwemQoeW6OYrdPvV/Q
=NyhW
-----END PGP SIGNATURE-----

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs is corrupted or a bug
  2016-07-29  9:01 partition 100% full No space left on device. looks like xfs is corrupted or a bug Lista Unx
  2016-07-29 10:48 ` Carlos E. R.
@ 2016-07-29 14:03 ` Brian Foster
  2016-07-29 14:37   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
  2016-07-29 21:49 ` partition 100% full No space left on device. looks like xfs is corrupted " Eric Sandeen
  2016-07-29 23:35 ` partition 100% full No space left on device. looks like xfs is corrupted " Dave Chinner
  3 siblings, 1 reply; 26+ messages in thread
From: Brian Foster @ 2016-07-29 14:03 UTC (permalink / raw)
  To: Lista Unx; +Cc: xfs

On Fri, Jul 29, 2016 at 12:01:42PM +0300, Lista Unx wrote:
> Hello xfs experts,
> 
> I have been crawling in the dark for a few days and I have no idea how to fix the following problem. On a CentOS 7 system:
> 
> # uname -a
> Linux 1a 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> 
> df is reporting / as 100% full, while du is reporting only 1.7G used out of the 50GB available (less than 4%). I want to mention that / is XFS. See below:
> 

First and foremost, have you run 'xfs_repair -n' to see if the fs is
healthy? If so, the next thing I would probably try is mount from a
single user mode of some sort (or boot a livecd) and recheck from there
to rule out any OS runtime weirdness going on (open but unlinked files,
files hidden under mount points, etc.).
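
For example (just a sketch; the device path is taken from your df output, and
your rescue environment may differ), from a live/rescue shell a read-only check
would look something like:

# vgchange -ay centos                     # activate the VG if the rescue env hasn't already
# xfs_repair -n /dev/mapper/centos-root   # -n: check only, makes no modifications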

Brian

> # df -a|grep ^/
> /dev/mapper/centos-root  52403200 52400396      2804 100% /
>                                      ^^^^^^^^^^   ^^^^^^^^^^
> /dev/sda1                  503040   131876    371164  27% /boot
> /dev/mapper/centos-home 210529792    35204 210494588   1% /home
> 
> du is estimating just 1.7G usage of /
> # du -sch /* --exclude=home --exclude=boot
> 0       /bin
> 0       /dev
> 25M     /etc
> 0       /lib
> 0       /lib64
> 744K    /luarocks-2.3.0
> 0       /media
> 0       /mnt
> 125M    /openresty-1.9.7.4
> 0       /opt
> 420K    /root
> 49M     /run
> 0       /sbin
> 0       /srv
> 0       /sys
> 0       /tmp
> 1.3G    /usr
> 227M    /var
> 1.7G    total
> [root@localhost ~]#
> 
> df is also reporting ~85% inode usage:
> 
> # df -i
> Filesystem                 Inodes IUsed     IFree IUse% Mounted on
> /dev/mapper/centos-root     78160 66218     11942   85% /
>                                        ^^^^^^^^
> devtmpfs                  8218272   519   8217753    1% /dev
> tmpfs                     8221010     1   8221009    1% /dev/shm
> tmpfs                     8221010   648   8220362    1% /run
> tmpfs                     8221010    13   8220997    1% /sys/fs/cgroup
> /dev/sda1                  509952   330    509622    1% /boot
> /dev/mapper/centos-home 210632704    99 210632605    1% /home
> tmpfs                     8221010     1   8221009    1% /run/user/0
> #
> 
> The / partition is created on top of an LVM logical volume, also 50GB in size.
> 
> # lvdisplay /dev/centos/root
>   --- Logical volume ---
>   LV Path                /dev/centos/root
>   LV Name                root
>   VG Name                centos
> 
>   LV Status              available
>   # open                 1
>   LV Size                50.00 GiB
>   Current LE             12800
>   Segments               1
>   Allocation             inherit
>   Read ahead sectors     auto
>   - currently set to     256
>   Block device           253:0
> 
> I've already checked for rootkits without finding anything wrong!
> 
> I have another system, identical to this one, which is healthy. The only difference I found between these systems is the maximum number of inodes available on / (which has the same size, 50GB, on both servers). On the second (healthy) one, the maximum number of inodes is ~52 million, and not just the ~85,000 reported on the sick server.
> 
> # df -i|grep ^/
> /dev/mapper/centos-root  52424704 66137  52358567    1% /
>                                    ^^^^^^^^^^^^^
> /dev/sda1                                  509952   330    509622    1% /boot
> /dev/mapper/centos-home 210632704    26 210632678    1% /home
> [root@localhost ~]#
> 
> I also suspected a large number of files on /. I counted the total number of files and both servers are the same: ~180K. So no difference here.
> 
> I also looked for files larger than 100M on both servers and found just one (104M in size):
> 
> find / -type f -size +100000k -exec ls -lh {} \;
> #
> /usr/lib/locale/locale-archive
> #
> 
> Looking for files larger than 10M, I found just ~20 on both servers.
> 
> # find / -type f -size +10000k -exec ls -lh {} \; |wc -l
> 16
> #
> 
> So for sure, there are NO files exhausting free space.
> 
> On both servers the number of used inodes is identical: ~66K. The xfs_info report is also identical for both. What differs is the number of AVAILABLE inodes: 85K (on the sick node) vs 52 million (on the healthy node)!!! How is that possible?! Both servers have the same size (50GB) for /!
> 
> #lsof -nP |grep -i delete|wc -l
> 0
> #find /proc/*/fd -ls | grep -i dele|wc -l
> 0
> 
> So lsof and find do not report anything wrong (no files deleted but still held open)!
> 
> Rebooting does not fix the problem; / remains 100% full.
> 
> After reboot, on 25th July:
> 
> # df -ah|grep centos-root
> /dev/mapper/centos-root   50G   50G  4.0M 100% /
> #
> 
> Also max number of inodes = 67k:
> # df -i
> Filesystem                 Inodes IUsed     IFree IUse% Mounted on
> /dev/mapper/centos-root     66960 66165       795   99% /
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> devtmpfs                  8218272   519   8217753    1% /dev
> tmpfs                     8221010     1   8221009    1% /dev/shm
> tmpfs                     8221010   630   8220380    1% /run
> tmpfs                     8221010    13   8220997    1% /sys/fs/cgroup
> /dev/sda1                  509952   330    509622    1% /boot
> /dev/mapper/centos-home 210632704    28 210632676    1% /home
> tmpfs                     8221010     1   8221009    1% /run/user/0
> #
> 
> Let's intentionally run xfs_growfs (which normally should not produce any change):
> 
> # xfs_growfs /dev/mapper/centos-root
> meta-data=/dev/mapper/centos-root isize=256    agcount=16, agsize=819136 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=13106176, imaxpct=25
>          =                       sunit=64     swidth=64 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=6400, version=2
>          =                       sectsz=512   sunit=64 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> data blocks changed from 13106176 to 13107200
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> #
> 
> The partition remains the same 50GB size:
> [root@nl-hvs-ov001a ~]# df -ah|grep centos-root
> /dev/mapper/centos-root   50G   50G  4.0M 100% /
> 
> But the number of inodes INCREASED by more than 20%!!!
> # df -i
> Filesystem                 Inodes IUsed     IFree IUse% Mounted on
> /dev/mapper/centos-root     83200 66165     17035   80% /
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> devtmpfs                  8218272   519   8217753    1% /dev
> tmpfs                     8221010     1   8221009    1% /dev/shm
> tmpfs                     8221010   630   8220380    1% /run
> tmpfs                     8221010    13   8220997    1% /sys/fs/cgroup
> /dev/sda1                  509952   330    509622    1% /boot
> /dev/mapper/centos-home 210632704    28 210632676    1% /home
> tmpfs                     8221010     1   8221009    1% /run/user/0
> #
> 
> On 27 July, without changing anything there, the maximum number of inodes available for / decreased back to ~67K (the same value as 2 days earlier, before xfs_growfs)!
> 
> # df -i
> Filesystem                 Inodes IUsed     IFree IUse% Mounted on
> /dev/mapper/centos-root     67024 66225       799   99% /
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> devtmpfs                  8218272   519   8217753    1% /dev
> tmpfs                     8221010     1   8221009    1% /dev/shm
> tmpfs                     8221010   632   8220378    1% /run
> tmpfs                     8221010    13   8220997    1% /sys/fs/cgroup
> /dev/mapper/centos-home 210632704    99 210632605    1% /home
> /dev/sda1                  509952   330    509622    1% /boot
> tmpfs                     8221010     1   8221009    1% /run/user/0
> #
> 
> Please note that during all this time the number of files remained unchanged at ~180K, and likewise the number of used inodes stayed constant at ~66K. Only the maximum number of available inodes decreased, which is abnormal behavior.
> 
> How can this be fixed? It looks like the XFS is corrupted, or like a bug.
> 
> Thanks in advance for help.
> Alex

> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug
  2016-07-29 10:48 ` Carlos E. R.
@ 2016-07-29 14:27   ` Lista Unx
  0 siblings, 0 replies; 26+ messages in thread
From: Lista Unx @ 2016-07-29 14:27 UTC (permalink / raw)
  To: Carlos E. R., XFS mailing list

# mount|grep ^/
/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/mapper/centos-home on /home type xfs (rw,relatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
#

I also checked xfs_info / and the output is the same for both (sick and healthy) 
servers.

----- Original Message ----- 
From: "Carlos E. R." <robin.listas@telefonica.net>
To: "XFS mailing list" <xfs@oss.sgi.com>
Sent: Friday, July 29, 2016 1:48 PM
Subject: Re: partition 100% full No space left on device. looks like xfs 
iscorrupted or a bug


> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
>
>
> On 2016-07-29 at 12:01 +0300, Lista Unx wrote:
>
>>Hello xfs experts,
>>
>>I have been crawling in the dark for a few days and I have no idea how to fix the 
>>following problem. On a CentOS 7 system:
>
> I'm not an expert, far from it, but... may I suggest you add the output
> of a plain "mount" command? To show the partitions. Or perhaps:
>
> lsblk --output NAME,TYPE,FSTYPE,SIZE,TYPE,MOUNTPOINT
>
>
> - -- 
> Cheers
>        Carlos E. R.
>
>        (from 13.1 x86_64 "Bottle" (Minas Tirith))
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v2.0.22 (GNU/Linux)
>
> iF4EAREIAAYFAlebNGQACgkQja8UbcUWM1y9WgEAhICHy+Td+nf5SFkIXshTL0hi
> 9KQinIbllstvICOAPhUBAIcP1gcc+LmfO6b4f4gsgUH6L3dwemQoeW6OYrdPvV/Q
> =NyhW
> -----END PGP SIGNATURE-----


--------------------------------------------------------------------------------


> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug
  2016-07-29 14:03 ` partition 100% full No space left on device. looks like xfs is corrupted " Brian Foster
@ 2016-07-29 14:37   ` Lista Unx
  2016-07-29 15:20     ` Brian Foster
  0 siblings, 1 reply; 26+ messages in thread
From: Lista Unx @ 2016-07-29 14:37 UTC (permalink / raw)
  To: Brian Foster; +Cc: xfs


----- Original Message ----- 
From: "Brian Foster" <bfoster@redhat.com>
To: "Lista Unx" <lista.unx@gmail.com>
Cc: <xfs@oss.sgi.com>
Sent: Friday, July 29, 2016 5:03 PM
Subject: Re: partition 100% full No space left on device. looks like xfs 
iscorrupted or a bug


> First and foremost, have you run 'xfs_repair -n' to see if the fs is
> healthy? If so, the next thing I would probably try is mount from a
> single user mode of some sort (or boot a livecd) and recheck from there
> to rule out any OS runtime weirdness going on (open but unlinked files,
> files hidden under mount points, etc.).
>
> Brian
>

That's what I wanted to do before posting here. I have access only via ssh, and 
xfs_repair requires the filesystem to be offline (the partition must not be 
mounted). For the moment I do not have access to the server via iLO (that will 
take another few weeks), and that's the reason I posted here: maybe we can 
conclude something without taking the server down.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug
  2016-07-29 14:37   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
@ 2016-07-29 15:20     ` Brian Foster
  0 siblings, 0 replies; 26+ messages in thread
From: Brian Foster @ 2016-07-29 15:20 UTC (permalink / raw)
  To: Lista Unx; +Cc: xfs

On Fri, Jul 29, 2016 at 05:37:19PM +0300, Lista Unx wrote:
> 
> ----- Original Message ----- From: "Brian Foster" <bfoster@redhat.com>
> To: "Lista Unx" <lista.unx@gmail.com>
> Cc: <xfs@oss.sgi.com>
> Sent: Friday, July 29, 2016 5:03 PM
> Subject: Re: partition 100% full No space left on device. looks like xfs
> iscorrupted or a bug
> 
> 
> > First and foremost, have you run 'xfs_repair -n' to see if the fs is
> > healthy? If so, the next thing I would probably try is mount from a
> > single user mode of some sort (or boot a livecd) and recheck from there
> > to rule out any OS runtime weirdness going on (open but unlinked files,
> > files hidden under mount points, etc.).
> > 
> > Brian
> > 
> 
> That's what I wanted to do before posting here. I have access only via ssh, and
> xfs_repair requires the filesystem to be offline (the partition must not be
> mounted). For the moment I do not have access to the server via iLO (that will
> take another few weeks), and that's the reason I posted here: maybe we can
> conclude something without taking the server down.
> 

I guess you could kill what you can, inspect any open files with
fuser/lsof, and try to unmount as much as possible. Otherwise, I'm not
sure how far you can get until you have the ability to check the fs.
Perhaps others have more ideas...
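
Something along these lines, for example (purely illustrative, adjust to whatever
is actually running on that box):

# fuser -vm /          # list processes with files open on the root fs
# lsof +L1             # open files with a link count of 0 (deleted but still held open)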

Brian

> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs is corrupted or a bug
  2016-07-29  9:01 partition 100% full No space left on device. looks like xfs is corrupted or a bug Lista Unx
  2016-07-29 10:48 ` Carlos E. R.
  2016-07-29 14:03 ` partition 100% full No space left on device. looks like xfs is corrupted " Brian Foster
@ 2016-07-29 21:49 ` Eric Sandeen
  2016-08-01 11:24   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
  2016-07-29 23:35 ` partition 100% full No space left on device. looks like xfs is corrupted " Dave Chinner
  3 siblings, 1 reply; 26+ messages in thread
From: Eric Sandeen @ 2016-07-29 21:49 UTC (permalink / raw)
  To: xfs

On 7/29/16 4:01 AM, Lista Unx wrote:
> Hello xfs experts,
>  
> I have been crawling in the dark for a few days and I have no idea how to fix the following problem. On a CentOS 7 system:
>  
> # uname -a
> Linux 1a 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>  
> df is reporting / as 100% full, while du is reporting only 1.7G used out of the 50GB available (less than 4%). I want to mention that / is XFS. See below:
>  
> # df -a|grep ^/
> /dev/mapper/centos-root  52403200 52400396      2804 100% /
>                                      ^^^^^^^^^^   ^^^^^^^^^^
> /dev/sda1                  503040   131876    371164  27% /boot
> /dev/mapper/centos-home 210529792    35204 210494588   1% /home
>  
> du is estimating just 1.7G usage of /
> # du -sch /* --exclude=home --exclude=boot

...

> 0       /lib64
> 744K    /luarocks-2.3.0
> 0       /media
> 0       /mnt
> 125M    /openresty-1.9.7.4
> 0       /opt
> 420K    /root
> 49M     /run
> 0       /sbin
> 0       /srv
> 0       /sys
> 0       /tmp
> 1.3G    /usr
> 227M    /var
> 1.7G    total
> [root@localhost ~]#

Can you include full contents of /proc/mounts?

If you have something bind-mounted or similar, it will hide it from "du" traversal.
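
Just to illustrate the effect (hypothetical paths, not your system): anything
mounted over a directory hides whatever was underneath it from du, while df
still counts the space.

# mkdir -p /srv/data /tmp/empty
# dd if=/dev/zero of=/srv/data/big bs=1M count=1024   # 1G file on /
# mount --bind /tmp/empty /srv/data                   # over-mount hides "big"
# du -sh /srv/data                                    # now ~0, but df still shows the 1G as used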

-Eric

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs is corrupted or a bug
  2016-07-29  9:01 partition 100% full No space left on device. looks like xfs is corrupted or a bug Lista Unx
                   ` (2 preceding siblings ...)
  2016-07-29 21:49 ` partition 100% full No space left on device. looks like xfs is corrupted " Eric Sandeen
@ 2016-07-29 23:35 ` Dave Chinner
  2016-08-01 12:00   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
  3 siblings, 1 reply; 26+ messages in thread
From: Dave Chinner @ 2016-07-29 23:35 UTC (permalink / raw)
  To: Lista Unx; +Cc: xfs

On Fri, Jul 29, 2016 at 12:01:42PM +0300, Lista Unx wrote:
> Hello xfs experts,
> 
> I have been crawling in the dark for a few days and I have no idea how to fix the following problem. On a CentOS 7 system:

Ok, so you followed my advice on why you couldn't post to the list,
but you ignored my answer as to the cause of the changing numbers of
inodes. I'll repeat it here for the benefit of everyone, so they
don't waste time chasing ghosts.

That is, inodes are dynamically allocated so the number of supported
inodes is directly proportional to the amount of free space left in
the filesystem. You have filesystems with different amounts of free
space, so the number of inodes the filesystem can support is
different. Free up some space and the number goes up; use some space and
the number goes down. This is expected.
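
As a rough sanity check with the numbers from your very first df output
(2804 1K-blocks free, isize=256 from your xfs_growfs output, ignoring imaxpct
and inode chunk alignment):

# echo $(( 2804 * 1024 / 256 ))
11216        # in the same ballpark as the 11942 IFree that df -i reported at the time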

Hence the only thing that may be an issue is this:

> # uname -a
> Linux 1a 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> 
> df is reporting / as 100% full, while du is reporting only 1.7G used out of the 50GB available (less than 4%). I want to mention that / is XFS. See below:
> 
> # df -a|grep ^/
> /dev/mapper/centos-root  52403200 52400396      2804 100% /
>                                      ^^^^^^^^^^   ^^^^^^^^^^
> /dev/sda1                  503040   131876    371164  27% /boot
> /dev/mapper/centos-home 210529792    35204 210494588   1% /home
> 
> du is estimating just 1.7G usage of /
> # du -sch /* --exclude=home --exclude=boot
.....
> 1.7G    total
> [root@localhost ~]#

That's probably because there are open but unlinked files present in
the filesystem, and du will not find them. e.g. large O_TMPFILE
files, or files that applications are using as scratch space. You
may even have zombie processes hanging about holding unlinked files
open.
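
If you want to see that effect in isolation, a trivial demonstration with a
throwaway file (hypothetical path) looks like this:

# exec 3>/tmp/hidden; rm /tmp/hidden    # unlinked, but fd 3 keeps the inode alive
# dd if=/dev/zero bs=1M count=512 >&3   # df usage grows...
# du -sh /tmp                           # ...but du never sees it
# exec 3>&-                             # closing the fd finally releases the space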

lsof might find those files, it might not. There might also be
orphan inodes on the unlinked lists, and without an unclean shutdown
log recovery won't process them. So it may simply be best to run
sync, then press the reset button to do a hard restart which
will trigger log recovery on restart. If the problem still persists,
then xfs_repair is really the only option to find out where the
space has gone and recover it.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug
  2016-07-29 21:49 ` partition 100% full No space left on device. looks like xfs is corrupted " Eric Sandeen
@ 2016-08-01 11:24   ` Lista Unx
  0 siblings, 0 replies; 26+ messages in thread
From: Lista Unx @ 2016-08-01 11:24 UTC (permalink / raw)
  To: xfs


----- Original Message ----- 
From: "Eric Sandeen" <sandeen@sandeen.net>
To: <xfs@oss.sgi.com>
Sent: Saturday, July 30, 2016 12:49 AM
Subject: Re: partition 100% full No space left on device. looks like xfs 
iscorrupted or a bug



> Can you include full contents of /proc/mounts?
>
> If you have something bind-mounted or similar, it will hide it from "du" 
> traversal.
>
> -Eric
>

Yes, see below:
# cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=32873088k,nr_inodes=8218272,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_cls 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/mapper/centos-root / xfs rw,relatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=28,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
/dev/mapper/centos-home /home xfs rw,relatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota 0 0
/dev/sda1 /boot xfs rw,relatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=6576812k,mode=700 0 0

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug
  2016-07-29 23:35 ` partition 100% full No space left on device. looks like xfs is corrupted " Dave Chinner
@ 2016-08-01 12:00   ` Lista Unx
  2016-08-01 12:23     ` Carlos E. R.
                       ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Lista Unx @ 2016-08-01 12:00 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs


----- Original Message ----- 
From: "Dave Chinner" <david@fromorbit.com>
To: "Lista Unx" <lista.unx@gmail.com>
Cc: <xfs@oss.sgi.com>
Sent: Saturday, July 30, 2016 2:35 AM
Subject: Re: partition 100% full No space left on device. looks like xfs 
iscorrupted or a bug


> Ok, so you followed my advice on why you couldn't post to the list,

Yes, I created a new gmail account especially to be able to post to this 
mailing list, which very aggressively filters legitimate messages coming from 
legitimate users just because they come from yahoo accounts (servers) ... 
but it allows ANYONE else to post here WITHOUT having a valid subscription and 
WITHOUT the slightest intention of posting anything related to XFS. 
Just in the last few days I was informed about a new microwave acquisition, 
plastic delivery, and other junk arriving here from a "trusted and very legit" 
source like gmail. That sounds like a really good job!

> but you ignored my answer as to the cause of the changing numbers of
> inodes. I'll repeat it here for the benefit of everyone, so they
> don't waste time chasing ghosts.

No, not at all; that is not my style. I just mentioned to you, twice, that we are 
not talking about the number of inodes used; we are talking about the maximum 
number of inodes, which differs by a factor of at least 10, for partitions with 
THE SAME SIZE AND USAGE!

> That is, inodes are dynamically allocated so the number of supported
> inodes is directly proportional to the amount of free space left in
> the filesystem. You have filesystems with different amounts

NO! Both systems are almost identical (minor differences) and this has been 
stated very clearly in my first post. It's not necessary to comment on each line 
of my post, just to point us in the right direction.

> That's probably because there are open but unlinked files present in
> the filesystem, and du will not find them. e.g. large O_TMPFILE
> files, or files that applications are using as scratch space. You
> may even have zombie processes hanging about holding unlinked files
> open.

As mentioned in my first post, a reboot does not solve the problem, and there 
are no files (large, small or of any kind) exhausting the inodes!

>
> lsof might find those files, it might not. There might also be
> orphan inodes on the unlinked lists, and without an unclean shutdown
> log recovery won't process them.

Yes, as also mentioned in my first post, lsof does not show any anomalies ...

> So it may simply be best to run
> sync, then press the reset button to do a hard restart which
> will trigger log recovery on restart.

The same; as mentioned in my first post, a reboot (which will clean up zombies) 
does not resolve the issue.

> If the problem still persists,
> then xfs_repair is really the only option to find out where the
> space has gone and recover it.

Yes, that was also my conclusion BEFORE posting here. I did not (yet) have the 
possibility of taking the / partition offline (i.e. unmounted, so that xfs_repair 
can run), and that's why I asked here: maybe someone has encountered a similar 
problem in the past, or if not, there may be a few other things to try before 
concluding that the last step is to take the server down for deep investigation.

I am still waiting for approval to take the server down for deeper investigation 
(xfs_repair & friends).

Have a nice day,
Alex 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug
  2016-08-01 12:00   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
@ 2016-08-01 12:23     ` Carlos E. R.
  2016-08-02 17:34       ` partition 100% full No space left on device. looks like xfsiscorrupted " Lista Unx
  2016-08-02 17:34       ` Lista Unx
  2016-08-01 16:51     ` partition 100% full No space left on device. looks like xfs iscorrupted " Chris Murphy
  2016-08-03 12:59     ` Spam on this list [Was: Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug] Carlos E. R.
  2 siblings, 2 replies; 26+ messages in thread
From: Carlos E. R. @ 2016-08-01 12:23 UTC (permalink / raw)
  To: XFS mailing list

[-- Attachment #1: Type: text/plain, Size: 1949 bytes --]

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Content-ID: <alpine.LSU.2.20.1608011420580.18210@zvanf-gvevgu.inyvabe>


On 2016-08-01 at 15:00 +0300, Lista Unx wrote:
> On 2016-07-30 at 09:35 +1000, Dave Chinner wrote:

>> That is, inodes are dynamically allocated so the number of supported
>> inodes is directly proportional to the amount of free space left in
>> the filesystem. You have filesystems with different amounts
>
> NO! Both systems are almost identical (minor differences) and this has been 
> stated very clearly in my first post. It's not necessary to comment on each line 
> of my post, just to point us in the right direction.

They are identical, except that one has free space and the other does 
not.

The number of inodes is dynamic and tied to free space. No free space, thus 
almost no inodes available. One thing follows from the other.

>> That's probably because there are open but unlinked files present in
>> the filesystem, and du will not find them. e.g. large O_TMPFILE
>> files, or files that applications are using as scratch space. You
>> may even have zombie processes hanging about holding unlinked files
>> open.

> As mentioned in my first post, a reboot does not solve the problem, 
> and there are no files (large, small or of any kind)
> exhausting the inodes!

Dave refers to a unix/linux "feature". Files can be deleted, but if 
they are in use at the time, the space is not actually released. The disk then 
shows an amount of free space that does not match the total minus the used space.

However, a reboot clears this situation, and you did say in the original 
post you had rebooted the system.

- -- 
Cheers
        Carlos E. R.

        (from 13.1 x86_64 "Bottle" (Minas Tirith))
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iF4EAREIAAYFAlefP00ACgkQja8UbcUWM1xEEQD+P6UaCgpP4L/ZES7wmVBBgLib
Rx78tVXJJKE5+FCQAY8BAI+6boqtqicVZA3PeOYM8PCa9IrRKyHffx3l5ty0LKpI
=h9xF
-----END PGP SIGNATURE-----

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug
  2016-08-01 12:00   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
  2016-08-01 12:23     ` Carlos E. R.
@ 2016-08-01 16:51     ` Chris Murphy
  2016-08-02 17:58       ` partition 100% full No space left on device. looks like xfsiscorrupted " Lista Unx
  2016-08-03 12:59     ` Spam on this list [Was: Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug] Carlos E. R.
  2 siblings, 1 reply; 26+ messages in thread
From: Chris Murphy @ 2016-08-01 16:51 UTC (permalink / raw)
  To: xfs

On Mon, Aug 1, 2016 at 6:00 AM, Lista Unx <lista.unx@gmail.com> wrote:

> Yes, I created a new gmail account especially to be able to post to this
> mailing list, which very aggressively filters legitimate messages coming from
> legitimate users just because they come from yahoo accounts (servers) ...

It's a Yahoo policy. It's completely reasonable for lists to reject
yahoo.com emails, but ideally it'd reject them at signup time.

https://help.yahoo.com/kb/SLN24050.html
https://help.yahoo.com/kb/mail/SLN24016.html?impressions=true
http://www.pcworld.com/article/2141120/yahoo-email-antispoofing-policy-breaks-mailing-lists.html


-- 
Chris Murphy

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfsiscorrupted or a bug
  2016-08-01 12:23     ` Carlos E. R.
@ 2016-08-02 17:34       ` Lista Unx
  2016-08-02 17:34       ` Lista Unx
  1 sibling, 0 replies; 26+ messages in thread
From: Lista Unx @ 2016-08-02 17:34 UTC (permalink / raw)
  To: Carlos E. R., XFS mailing list


----- Original Message ----- 
From: "Carlos E. R." <robin.listas@telefonica.net>
To: "XFS mailing list" <xfs@oss.sgi.com>
Sent: Monday, August 01, 2016 3:23 PM
Subject: Re: partition 100% full No space left on device. looks like 
xfsiscorrupted or a bug


> Dave refers to a unix/linux "feature". Files can be deleted, but if
> they are in use at the time, the space is not actually released. The disk then
> shows an amount of free space that does not match the total minus the used space.
>

It HAS been checked! See below, a snippet from my first post.

#lsof -nP |grep -i delete|wc -l
0
#find /proc/*/fd -ls | grep -i dele|wc -l
0

> However, a reboot clears this situation, and you did say in the original
> post you had rebooted the system.

Yes, I also tried a reboot, which did not solve the problem.

It's obvious that until xfs_repair is run, we can't conclude anything more. This 
will happen in the next few days ... 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfsiscorrupted or a bug
  2016-08-01 12:23     ` Carlos E. R.
  2016-08-02 17:34       ` partition 100% full No space left on device. looks like xfsiscorrupted " Lista Unx
@ 2016-08-02 17:34       ` Lista Unx
  1 sibling, 0 replies; 26+ messages in thread
From: Lista Unx @ 2016-08-02 17:34 UTC (permalink / raw)
  To: Carlos E. R., XFS mailing list


----- Original Message ----- 
From: "Carlos E. R." <robin.listas@telefonica.net>
To: "XFS mailing list" <xfs@oss.sgi.com>
Sent: Monday, August 01, 2016 3:23 PM
Subject: Re: partition 100% full No space left on device. looks like 
xfsiscorrupted or a bug


> Dave refers to a unix/linux "feature". Files can be deleted, but if
> they are in use at the time, the space is not actually released. The disk then
> shows an amount of free space that does not match the total minus the used space.
>

It HAS been checked! See below, a snippet from my first post.

#lsof -nP |grep -i delete|wc -l
0
#find /proc/*/fd -ls | grep -i dele|wc -l
0

> However, a reboot clears this situation, and you did say in the original
> post you had rebooted the system.

Yes, I also tried a reboot, which did not solve the problem.

It's obvious that until xfs_repair is run, we can't conclude anything more. This 
will happen in the next few days ... 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: partition 100% full No space left on device. looks like xfsiscorrupted or a bug
  2016-08-01 16:51     ` partition 100% full No space left on device. looks like xfs iscorrupted " Chris Murphy
@ 2016-08-02 17:58       ` Lista Unx
  2016-08-02 19:11         ` Troy McCorkell
  0 siblings, 1 reply; 26+ messages in thread
From: Lista Unx @ 2016-08-02 17:58 UTC (permalink / raw)
  To: Chris Murphy, xfs


----- Original Message ----- 
From: "Chris Murphy" <lists@colorremedies.com>
To: <xfs@oss.sgi.com>
Sent: Monday, August 01, 2016 7:51 PM
Subject: Re: partition 100% full No space left on device. looks like 
xfsiscorrupted or a bug


> On Mon, Aug 1, 2016 at 6:00 AM, Lista Unx <lista.unx@gmail.com> wrote:
>
>> Yes, I created a new gmail account especially to be able to post to this
>> mailing list, which very aggressively filters legitimate messages coming from
>> legitimate users just because they come from yahoo accounts (servers) ...
>
> It's a Yahoo policy. It's completely reasonable for lists to reject
> yahoo.com emails, but ideally it'd reject them at signup time.
>
> https://help.yahoo.com/kb/SLN24050.html
> https://help.yahoo.com/kb/mail/SLN24016.html?impressions=true
> http://www.pcworld.com/article/2141120/yahoo-email-antispoofing-policy-breaks-mailing-lists.html
>
>

OK, I understand. In that case:
1. It would be a good idea not to allow those using yahoo accounts to subscribe to 
this list (with a clear message explaining why you are doing it)
and
2. Reject all messages coming from users who do not have a valid 
membership on this list (for sure, you will reduce spam)

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: partition 100% full No space left on device. looks like xfsiscorrupted or a bug
  2016-08-02 17:58       ` partition 100% full No space left on device. looks like xfsiscorrupted " Lista Unx
@ 2016-08-02 19:11         ` Troy McCorkell
  0 siblings, 0 replies; 26+ messages in thread
From: Troy McCorkell @ 2016-08-02 19:11 UTC (permalink / raw)
  To: Lista Unx, Chris Murphy, xfs

>On Mon, Aug 1, 2016 at 7:51 AM, Lista Unx <lista.unx@gmail.com> wrote:
>
>> On Mon, Aug 1, 2016 at 6:00 AM, Lista Unx <lista.unx@gmail.com> wrote:
>>
>>> Yes, I created a new gmail account especially to be able to post to this
>>> mailing list, which very aggressively filters legitimate messages coming from
>>> legitimate users just because they come from yahoo accounts (servers) ...
>>
>> It's a Yahoo policy. It's completely reasonable for lists to reject
>> yahoo.com emails, but ideally it'd reject them at signup time.
>>
>> https://help.yahoo.com/kb/SLN24050.html
>> https://help.yahoo.com/kb/mail/SLN24016.html?impressions=true
>> http://www.pcworld.com/article/2141120/yahoo-email-antispoofing-policy-breaks-mailing-lists.html
>>
>>
>
>OK, I understand. In that case:
>1. It would be a good idea not to allow those using yahoo accounts to subscribe to
>this list (with a clear message explaining why you are doing it)
>and
>2. Reject all messages coming from users who do not have a valid
>membership on this list (for sure, you will reduce spam)

The policy for the xfs@oss.sgi.com mailing list is that it is open and unmoderated, to allow any user
of XFS to communicate with the XFS community.    Yes, the policy does result in spam on the mailing list.

If you have problems with the mailing list please CC me on the email.

Thanks,
Troy McCorkell
SGI



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Spam on this list  [Was: Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug]
  2016-08-01 12:00   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
  2016-08-01 12:23     ` Carlos E. R.
  2016-08-01 16:51     ` partition 100% full No space left on device. looks like xfs iscorrupted " Chris Murphy
@ 2016-08-03 12:59     ` Carlos E. R.
  2016-08-03 13:21       ` Martin Steigerwald
  2 siblings, 1 reply; 26+ messages in thread
From: Carlos E. R. @ 2016-08-03 12:59 UTC (permalink / raw)
  To: XFS mailing list

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1



On Monday, 2016-08-01 at 15:00 +0300, Lista Unx wrote:

> Yes, I created a new gmail account especially to be able to post to this 
> mailing list, which very aggressively filters legitimate messages coming from 
> legitimate users just because they come from yahoo accounts (servers) ... 
> but it allows ANYONE else to post here WITHOUT having a valid subscription and 
> WITHOUT the slightest intention of posting anything related to XFS. 
> Just in the last few days I was informed about a new microwave acquisition, 
> plastic delivery, and other junk arriving here from a "trusted and very legit" 
> source like gmail. That sounds like a really good job!

:-)

Yes, spam is bad on this list, and has increased recently. You will 
find, however, that gmail does a good job of filtering it out: you 
only have to mark as spam what it does not detect, and conversely, 
clear out the false positives. It learns quickly.

I also have problems with my ISP and Spam on this list.


You will find that the people on this list are very knowledgeable and 
will try to help you with your XFS problem. Spam is not something within 
their power, though X'-)

- -- 
Cheers,
        Carlos E. R.
        (from 13.1 x86_64 "Bottle" at Telcontar)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iEYEARECAAYFAleh6p8ACgkQtTMYHG2NR9VwbACfQ48V7GoSWDjxkscKMZZBGbeW
qf8An0XDo7JRY1wOjQlVAqyE3Of/t6DG
=n207
-----END PGP SIGNATURE-----

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Spam on this list [Was: Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug]
  2016-08-03 12:59     ` Spam on this list [Was: Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug] Carlos E. R.
@ 2016-08-03 13:21       ` Martin Steigerwald
  2016-08-03 13:34         ` Carlos E. R.
  0 siblings, 1 reply; 26+ messages in thread
From: Martin Steigerwald @ 2016-08-03 13:21 UTC (permalink / raw)
  To: xfs; +Cc: Lista Unx, Carlos E. R.

On Wednesday, 3 August 2016, 14:59:11 CEST, Carlos E. R. wrote:
> On Monday, 2016-08-01 at 15:00 +0300, Lista Unx wrote:
> > Yes, I created a new gmail account especially to be able to post to this
> > mailing list, which very aggressively filters legitimate messages coming from
> > legitimate users just because they come from yahoo accounts (servers)
> > ... but it allows ANYONE else to post here WITHOUT having a valid
> > subscription and WITHOUT the slightest intention of posting anything
> > related to XFS. Just in the last few days I was informed about a new
> > microwave acquisition, plastic delivery, and other junk arriving here
> > from a "trusted and very legit" source like gmail.
> > That sounds like a really good job!
> :-)
> 
> Yes, spam is bad on this list, and has increased recently. You will
> find, however, that gmail does a good job of filtering it out: you
> only have to mark as spam what it does not detect, and conversely,
> clear out the false positives. It learns quickly.
> 
> I also have problems with my ISP and Spam on this list.
> 
> 
> You will find that the people on this list are very knowledgeable and
> will try to help you with your XFS problem. Spam is not something within
> their power, though X'-)

Lista, Carlos, I recommend: if you are concerned about spam on the list, 
contact its listmasters. They are the ones who can address it globally 
for the list.

Or… improve your spamfilters. I did not see any of the spam mails you 
mentioned, Lista, as my mailserver rejected them on the SMTP level.

Discussing spam topics here just adds to the noise.

-- 
Martin

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Spam on this list [Was: Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug]
  2016-08-03 13:21       ` Martin Steigerwald
@ 2016-08-03 13:34         ` Carlos E. R.
  2016-08-03 23:15           ` Spam on this list Dave Chinner
  0 siblings, 1 reply; 26+ messages in thread
From: Carlos E. R. @ 2016-08-03 13:34 UTC (permalink / raw)
  To: XFS mail list


[-- Attachment #1.1: Type: text/plain, Size: 618 bytes --]

On 2016-08-03 15:21, Martin Steigerwald wrote:

> Lista, Carlos, I recommend: If you are concerned about spam on the list 
> contact the listmasters of it. They are the ones that can address it globally 
> for the list.

Oh, I did, long ago. Still waiting.

> Or… improve your spamfilters. I did not see any of the spam mails you 
> mentioned, Lista, as my mailserver rejected them on the SMTP level.

So does mine. Now and then it rejects spam, and the list automatically
stops my subscription as a consequence.

-- 
Cheers / Saludos,

		Carlos E. R.
		(from 13.1 x86_64 "Bottle" at Telcontar)


[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Spam on this list
  2016-08-03 13:34         ` Carlos E. R.
@ 2016-08-03 23:15           ` Dave Chinner
  2016-08-03 23:29             ` Darrick J. Wong
                               ` (4 more replies)
  0 siblings, 5 replies; 26+ messages in thread
From: Dave Chinner @ 2016-08-03 23:15 UTC (permalink / raw)
  To: Carlos E. R.; +Cc: XFS mail list

On Wed, Aug 03, 2016 at 03:34:58PM +0200, Carlos E. R. wrote:
> On 2016-08-03 15:21, Martin Steigerwald wrote:
> 
> > Lista, Carlos, I recommend: If you are concerned about spam on the list 
> > contact the listmasters of it. They are the ones that can address it globally 
> > for the list.
> 
> Oh, I did, long ago. Still waiting.

Yes, that is the fundamental issue - spam filtering is essentially
controlled by SGI's internal infrastructure, over which we have little
control.

What it comes down to is whether we continue to use this list
(xfs@oss.sgi.com) or whether we move to linux-xfs@vger.kernel.org
so we get much more robust and up-to-date spam filtering. The issue
with doing this is forcing everyone to resubscribe, and then
capturing everything that is still sent to xfs@oss.sgi.com.

That said, I'm seriously tempted right now just to say "we're moving
to vger" and asking everyone to resubscribe to that list, and then
making xfs@oss.sgi.com respond with "list moved to vger, please
repost there". i.e. not even put a forwarding gateway in place.

If we do that, then I'll also shut down all the XFS git trees on
oss.sgi.com - I'll add commits to them to say "go to
kernel.org". I'll need to work something out for the tarball
releases, but kernel.org does have functionality for that, too, so
that may just be a small change of process on my end (i.e. use kup).
Once that is done, we'll be running completely on community provided
infrastructure....

Thoughts?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Spam on this list
  2016-08-03 23:15           ` Spam on this list Dave Chinner
@ 2016-08-03 23:29             ` Darrick J. Wong
  2016-08-04  0:51             ` Carlos E. R.
                               ` (3 subsequent siblings)
  4 siblings, 0 replies; 26+ messages in thread
From: Darrick J. Wong @ 2016-08-03 23:29 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Carlos E. R., XFS mail list

On Thu, Aug 04, 2016 at 09:15:29AM +1000, Dave Chinner wrote:
> On Wed, Aug 03, 2016 at 03:34:58PM +0200, Carlos E. R. wrote:
> > On 2016-08-03 15:21, Martin Steigerwald wrote:
> > 
> > > Lista, Carlos, I recommend: If you are concerned about spam on the list 
> > > contact the listmasters of it. They are the ones that can address it globally 
> > > for the list.
> > 
> > Oh, I did, long ago. Still waiting.
> 
> Yes, that is the fundamental issue - spam filtering is essentially
> controlled by SGI's internal infrastructure, which we have little
> option on.
> 
> What it comes down to is whether we continue to use this list
> (xfs@oss.sgi.com) or whether we move to linux-xfs@vger.kernel.org
> so we get much more robust and up-to-date spam filtering. The issue
> with doing this is forcing everyone to resubscribe, and then
> capturing everything that is still sent to xfs@oss.sgi.com.
> 
> That said, I'm seriously tempted right now just to say "we're moving
> to vger" and asking everyone to resubscribe to that list, and then
> making xfs@oss.sgi.com respond with "list moved to vger, please
> repost there". i.e. not even put a forwarding gateway in place.
> 
> If we do that, then I'll also shut down all the XFS git trees on
> oss.sgi.com - I'll add commits to the them to say "go to
> kernel.org". I'll need to work something out for the tarball
> releases, but kernel.org does have functionality for that, too, so
> that may just be a small change of process on my end (i.e. use kup).
> Once that is done, we'll be running completely on community provided
> infrastructure....
> 
> Thoughts?

YAY!

By the way, could we update the docs on
http://xfs.org/index.php/XFS_Papers_and_Documentation ?

The filesystem structure guide is a little out of date.

(The user guide probably is too, but as I've only been sending patches
for the disk format guide I'm keeping my mouth shut about the others.)

--D

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Spam on this list
  2016-08-03 23:15           ` Spam on this list Dave Chinner
  2016-08-03 23:29             ` Darrick J. Wong
@ 2016-08-04  0:51             ` Carlos E. R.
  2016-08-04 11:34             ` Lista Unx
                               ` (2 subsequent siblings)
  4 siblings, 0 replies; 26+ messages in thread
From: Carlos E. R. @ 2016-08-04  0:51 UTC (permalink / raw)
  To: XFS mailing list

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 2016-08-04 01:15, Dave Chinner wrote:
> On Wed, Aug 03, 2016 at 03:34:58PM +0200, Carlos E. R. wrote:
>> On 2016-08-03 15:21, Martin Steigerwald wrote:

...

> Thoughts?

I have no objection to either staying or moving :-)

That is, I can live with the spam, and I can live with the move. You
do not need to cater to people like me; we are just "users" ;-)

It is the contributors you have to mind most.

I suppose that as long as you leave an autoresponder in place it will be OK.

So go ahead when you wish :-)

- -- 
Cheers / Saludos,

		Carlos E. R.

  (from 13.1 x86_64 "Bottle" (Minas Tirith))
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iF4EAREIAAYFAleikaMACgkQja8UbcUWM1x4AgD/b5NpxNS0O1LFuYAlBpdvMG7R
Ikr5YoxtNqLT9EcT8YQA/1O9P1p977AEOSlHxbxP3YMB4uEsea8p2PCaGINCUrKr
=OIsJ
-----END PGP SIGNATURE-----

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Spam on this list
  2016-08-03 23:15           ` Spam on this list Dave Chinner
  2016-08-03 23:29             ` Darrick J. Wong
  2016-08-04  0:51             ` Carlos E. R.
@ 2016-08-04 11:34             ` Lista Unx
  2016-08-04 13:40             ` Troy McCorkell
  2016-08-04 15:49             ` Martin Steigerwald
  4 siblings, 0 replies; 26+ messages in thread
From: Lista Unx @ 2016-08-04 11:34 UTC (permalink / raw)
  To: Dave Chinner, Carlos E. R.; +Cc: XFS mail list


----- Original Message ----- 
From: "Dave Chinner" <david@fromorbit.com>
To: "Carlos E. R." <robin.listas@telefonica.net>
Cc: "XFS mail list" <xfs@oss.sgi.com>
Sent: Thursday, August 04, 2016 2:15 AM
Subject: Re: Spam on this list


> Yes, that is the fundamental issue - spam filtering is essentially
> controlled by SGI's internal infrastructure, which we have little
> option on.
>
> What it comes down to is whether we continue to use this list
> (xfs@oss.sgi.com) or whether we move to linux-xfs@vger.kernel.org
> so we get much more robust and up-to-date spam filtering. The issue
> with doing this is forcing everyone to resubscribe, and then
> capturing everything that is still sent to xfs@oss.sgi.com.
>
> That said, I'm seriously tempted right now just to say "we're moving
> to vger" and asking everyone to resubscribe to that list, and then
> making xfs@oss.sgi.com respond with "list moved to vger, please
> repost there". i.e. not even put a forwarding gateway in place.
>
> If we do that, then I'll also shut down all the XFS git trees on
> oss.sgi.com - I'll add commits to the them to say "go to
> kernel.org". I'll need to work something out for the tarball
> releases, but kernel.org does have functionality for that, too, so
> that may just be a small change of process on my end (i.e. use kup).
> Once that is done, we'll be running completely on community provided
> infrastructure....
>
> Thoughts?

I completely agree; it is a good idea to move to a new place where better
filters are already in place. It is a small change on the users' side and
the benefits are for everyone. I cannot see any downside to doing it. It
would probably also be best to keep a message for newcomers (and for
robots) saying that the list has moved to vger, along with the new way to
subscribe.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Spam on this list
  2016-08-03 23:15           ` Spam on this list Dave Chinner
                               ` (2 preceding siblings ...)
  2016-08-04 11:34             ` Lista Unx
@ 2016-08-04 13:40             ` Troy McCorkell
  2016-08-04 15:49             ` Martin Steigerwald
  4 siblings, 0 replies; 26+ messages in thread
From: Troy McCorkell @ 2016-08-04 13:40 UTC (permalink / raw)
  To: Dave Chinner, Carlos E. R.; +Cc: XFS mail list

On Wed, Aug 03, 2016 at 6:15 PM Dave Chinner wrote:
>On Wed, Aug 03, 2016 at 03:34:58PM +0200, Carlos E. R. wrote:
>> On 2016-08-03 15:21, Martin Steigerwald wrote:
>>
>> > Lista, Carlos, I recommend: If you are concerned about spam on the list
>> > contact the listmasters of it. They are the ones that can address it globally
>> > for the list.
>>
>> Oh, I did, long ago. Still waiting.
>
>Yes, that is the fundamental issue - spam filtering is essentially
>controlled by SGI's internal infrastructure, which we have little
>option on.
>
>What it comes down to is whether we continue to use this list
>(xfs@oss.sgi.com) or whether we move to linux-xfs@vger.kernel.org
>so we get much more robust and up-to-date spam filtering. The issue
>with doing this is forcing everyone to resubscribe, and then
>capturing everything that is still sent to xfs@oss.sgi.com.
>
>That said, I'm seriously tempted right now just to say "we're moving
>to vger" and asking everyone to resubscribe to that list, and then
>making xfs@oss.sgi.com respond with "list moved to vger, please
>repost there". i.e. not even put a forwarding gateway in place.
>
>If we do that, then I'll also shut down all the XFS git trees on
>oss.sgi.com - I'll add commits to the them to say "go to
>kernel.org". I'll need to work something out for the tarball
>releases, but kernel.org does have functionality for that, too, so
>that may just be a small change of process on my end (i.e. use kup).
>Once that is done, we'll be running completely on community provided
>infrastructure....
>
>Thoughts?
>
>Cheers,
>
>Dave.
>--
>Dave Chinner
>david@fromorbit.com

Dave,

It's probably the best option to move the mailing list to vger.
Let me know what we can do to facilitate the move.

Thanks,
Troy

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Spam on this list
  2016-08-03 23:15           ` Spam on this list Dave Chinner
                               ` (3 preceding siblings ...)
  2016-08-04 13:40             ` Troy McCorkell
@ 2016-08-04 15:49             ` Martin Steigerwald
  2016-08-05  8:25               ` Carlos Eduardo Maiolino
  4 siblings, 1 reply; 26+ messages in thread
From: Martin Steigerwald @ 2016-08-04 15:49 UTC (permalink / raw)
  To: xfs; +Cc: Carlos E. R.

Am Donnerstag, 4. August 2016, 09:15:29 CEST schrieb Dave Chinner:
> On Wed, Aug 03, 2016 at 03:34:58PM +0200, Carlos E. R. wrote:
> > On 2016-08-03 15:21, Martin Steigerwald wrote:
> > > Lista, Carlos, I recommend: If you are concerned about spam on the list
> > > contact the listmasters of it. They are the ones that can address it
> > > globally for the list.
> > 
> > Oh, I did, long ago. Still waiting.
> 
> Yes, that is the fundamental issue - spam filtering is essentially
> controlled by SGI's internal infrastructure, which we have little
> option on.
> 
> What it comes down to is whether we continue to use this list
> (xfs@oss.sgi.com) or whether we move to linux-xfs@vger.kernel.org
> so we get much more robust and up-to-date spam filtering. The issue
> with doing this is forcing everyone to resubscribe, and then
> capturing everything that is still sent to xfs@oss.sgi.com.
> 
> That said, I'm seriously tempted right now just to say "we're moving
> to vger" and asking everyone to resubscribe to that list, and then
> making xfs@oss.sgi.com respond with "list moved to vger, please
> repost there". i.e. not even put a forwarding gateway in place.
> 
> If we do that, then I'll also shut down all the XFS git trees on
> oss.sgi.com - I'll add commits to the them to say "go to
> kernel.org". I'll need to work something out for the tarball
> releases, but kernel.org does have functionality for that, too, so
> that may just be a small change of process on my end (i.e. use kup).
> Once that is done, we'll be running completely on community provided
> infrastructure....
> 
> Thoughts?

Nice - so this time, discussing the spam on the list, on the list itself,
may actually have a good effect in the end.

I completely agree and have no issues with resubscribing there.

Thank you,
-- 
Martin

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Spam on this list
  2016-08-04 15:49             ` Martin Steigerwald
@ 2016-08-05  8:25               ` Carlos Eduardo Maiolino
  0 siblings, 0 replies; 26+ messages in thread
From: Carlos Eduardo Maiolino @ 2016-08-05  8:25 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: Carlos E. R., xfs


>> 
>> That said, I'm seriously tempted right now just to say "we're moving 
>> to vger" and asking everyone to resubscribe to that list, and then 
>> making xfs@oss.sgi.com respond with "list moved to vger, please 
>> repost there". i.e. not even put a forwarding gateway in place. 
>> 
>> If we do that, then I'll also shut down all the XFS git trees on 
>> oss.sgi.com - I'll add commits to the them to say "go to 
>> kernel.org". I'll need to work something out for the tarball 
>> releases, but kernel.org does have functionality for that, too, so 
>> that may just be a small change of process on my end (i.e. use kup). 
>> Once that is done, we'll be running completely on community provided 
>> infrastructure.... 
>> 
>> Thoughts? 

>Nice, so this time discussing spam on the list on the list may have a nice 
>effect in the end.
>
>I completely agree and have no issues with resubscribung there. 
>
>Thank you, 
>-- 
>Martin 

+1 here.

Honestly, I don't believe that moving to vger will be much trouble for anyone other than the list maintainer, who will need to set up the auto-respond messages.
And we get the advantage of moving to community-provided infrastructure, as you said.

Let me know if you need any help with it; I'll be glad to help with this change.



-- 
--Carlos

P.S. Martin, I apologize for the dup'ed e-mail; my MUA tricked me and I didn't copy the xfs list on my previous reply.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2016-08-05  8:25 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-29  9:01 partition 100% full No space left on device. looks like xfs is corrupted or a bug Lista Unx
2016-07-29 10:48 ` Carlos E. R.
2016-07-29 14:27   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
2016-07-29 14:03 ` partition 100% full No space left on device. looks like xfs is corrupted " Brian Foster
2016-07-29 14:37   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
2016-07-29 15:20     ` Brian Foster
2016-07-29 21:49 ` partition 100% full No space left on device. looks like xfs is corrupted " Eric Sandeen
2016-08-01 11:24   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
2016-07-29 23:35 ` partition 100% full No space left on device. looks like xfs is corrupted " Dave Chinner
2016-08-01 12:00   ` partition 100% full No space left on device. looks like xfs iscorrupted " Lista Unx
2016-08-01 12:23     ` Carlos E. R.
2016-08-02 17:34       ` partition 100% full No space left on device. looks like xfsiscorrupted " Lista Unx
2016-08-02 17:34       ` Lista Unx
2016-08-01 16:51     ` partition 100% full No space left on device. looks like xfs iscorrupted " Chris Murphy
2016-08-02 17:58       ` partition 100% full No space left on device. looks like xfsiscorrupted " Lista Unx
2016-08-02 19:11         ` Troy McCorkell
2016-08-03 12:59     ` Spam on this list [Was: Re: partition 100% full No space left on device. looks like xfs iscorrupted or a bug] Carlos E. R.
2016-08-03 13:21       ` Martin Steigerwald
2016-08-03 13:34         ` Carlos E. R.
2016-08-03 23:15           ` Spam on this list Dave Chinner
2016-08-03 23:29             ` Darrick J. Wong
2016-08-04  0:51             ` Carlos E. R.
2016-08-04 11:34             ` Lista Unx
2016-08-04 13:40             ` Troy McCorkell
2016-08-04 15:49             ` Martin Steigerwald
2016-08-05  8:25               ` Carlos Eduardo Maiolino
