* Fwd: Re: So, does btrfs check lowmem take days? weeks?
       [not found]               ` <20180709174818.wq2d4awmgasxgwad@merlins.org>
@ 2018-07-10  1:33                 ` Su Yue
  2018-07-10  1:33                   ` Su Yue
       [not found]                 ` <faba0923-8d1f-5270-ba03-ce9cc484e08a@gmx.com>
       [not found]                 ` <trinity-4546309e-d603-4d29-885a-e76da594f792-1531159860064@msvc-mesg-gmx021>
  2 siblings, 1 reply; 4+ messages in thread
From: Su Yue @ 2018-07-10  1:33 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Qu Wenruo

[FWD to linux-btrfs]
Thanks to Marc for patiently running and testing btrfsck lowmem mode
in recent days.
The FS has a large extent tree, but luckily only a few items were
corrupted, and they were all fixed by the special version. Reloc trees
were cleaned too, so the FS can be mounted RW again.

However, the remaining extent data errors in the file trees are
unresolved; they are all about holes.
Since I'm not familiar with the kernel code, I'm not sure how serious
those errors are, or what could happen when reading/writing the wrong
items.

Marc also has some questions in the forwarded part; replies are
always welcome.

Error messages are shown at the end.


-------- Forwarded Message --------
Subject: Re: So, does btrfs check lowmem take days? weeks?
Date: Mon, 9 Jul 2018 10:48:18 -0700
From: Marc MERLIN <marc@merlins.org>
To: Su Yue <Damenly_Su@gmx.com>
CC: quwenruo.btrfs@gmx.com, Su Yue <suy.fnst@cn.fujitsu.com>

Success!
Well done Su, this is a huge improvement to the lowmem code. It went 
from days to less than 3 hours.

I'll paste the logs below.

Questions:
1) I assume I first need to delete a lot of snapshots. What is the limit 
in your opinion?
100? 150? other?

2) my filesystem is somewhat misbalanced. Which balance options do you 
think are safe to use?

3) Should I start a scrub now (takes about 1 day) or anything else to
check that the filesystem is hopefully not damaged anymore?

4) should btrfs check reset the corrupt counter?
bdev /dev/mapper/dshelf2 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
for now, should I reset it manually?

Thanks,
Marc


gargamel:/var/local/src/btrfs-progs.sy# ./btrfsck --mode=lowmem -q --repair /dev/mapper/dshelf2
enabling repair mode
WARNING: low-memory mode repair support is only partial
Checking filesystem on /dev/mapper/dshelf2
UUID: 0f1a0c9f-4e54-4fa7-8736-fd50818ff73d
Created new chunk [18460145811456 1073741824]
Add one extent data backref [84302495744 69632]
Add one extent data backref [84302495744 69632]
Add one extent data backref [125712527360 12214272]
Add one extent data backref [125730848768 5111808]
Add one extent data backref [125730848768 5111808]
Add one extent data backref [125736914944 6037504]
Add one extent data backref [125736914944 6037504]
Add one extent data backref [129952120832 20242432]
Add one extent data backref [129952120832 20242432]
Add one extent data backref [134925357056 11829248]
Add one extent data backref [134925357056 11829248]
Add one extent data backref [147895111680 12345344]
Add one extent data backref [147895111680 12345344]
Add one extent data backref [150850146304 17522688]
Add one extent data backref [156909494272 55320576]
Add one extent data backref [156909494272 55320576]
good luck!
found 0 bytes used, no error found
total csum bytes: 0
total tree bytes: 0
total fs tree bytes: 0
total extent tree bytes: 0
btree space waste bytes: 0
file data blocks allocated: 0
  referenced 0
gargamel:/var/local/src/btrfs-progs.sy# ./btrfsck --mode=lowmem -q /dev/mapper/dshelf2
Checking filesystem on /dev/mapper/dshelf2
UUID: 0f1a0c9f-4e54-4fa7-8736-fd50818ff73d
good luck!
found 251650048 bytes used, no error found
total csum bytes: 0
total tree bytes: 0
total fs tree bytes: 0
total extent tree bytes: 0
btree space waste bytes: 0
file data blocks allocated: 0
  referenced 0
gargamel:/var/local/src/btrfs-progs.sy# ./btrfsck -c /dev/mapper/dshelf2
Checking filesystem on /dev/mapper/dshelf2
UUID: 0f1a0c9f-4e54-4fa7-8736-fd50818ff73d
found 0 bytes used, no error found
total csum bytes: 0
total tree bytes: 0
total fs tree bytes: 0
total extent tree bytes: 0
btree space waste bytes: 0
file data blocks allocated: 0
  referenced 0
gargamel:/var/local/src/btrfs-progs.sy# mount /dev/mapper/dshelf2 /mnt/mnt
[671283.314558] BTRFS info (device dm-2): disk space caching is enabled
[671283.334226] BTRFS info (device dm-2): has skinny extents
[671285.191740] BTRFS info (device dm-2): bdev /dev/mapper/dshelf2 errs: 
wr 0, rd 0, flush 0, corrupt 2, gen 0
[671395.371313] BTRFS info (device dm-2): enabling ssd optimizations
[671400.884013] BTRFS info (device dm-2): checking UUID tree
(hung for about 2-3 minutes but worked eventually)

gargamel:/mnt/mnt# btrfs fi show .
Label: 'dshelf2'  uuid: 0f1a0c9f-4e54-4fa7-8736-fd50818ff73d
         Total devices 1 FS bytes used 12.59TiB
         devid    1 size 14.55TiB used 13.81TiB path /dev/mapper/dshelf2

gargamel:/mnt/mnt# btrfs fi df .
Data, single: total=13.57TiB, used=12.48TiB
System, DUP: total=32.00MiB, used=1.55MiB
Metadata, DUP: total=124.50GiB, used=116.92GiB
Metadata, single: total=216.00MiB, used=0.00B
GlobalReserve, single: total=512.00MiB, used=42.62MiB

gargamel:/mnt/mnt# btrfs subvolume list . | wc -l
270


-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
                                      .... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/                       | PGP 7F55D5F27AAF9D08


Error messages below:

==================================================================
btrfsck --mode=lowmem --repair /dev/mapper/dshelf2
enabling repair mode
WARNING: low-memory mode repair support is only partial
Checking filesystem on /dev/mapper/dshelf2
UUID: 0f1a0c9f-4e54-4fa7-8736-fd50818ff73d
ERROR: extent[84302495744, 69632] referencer count mismatch (root: 
21872, owner: 374857, offset: 3407872) wanted: 2, have: 3
ERROR: extent[84302495744, 69632] referencer count mismatch (root: 
22911, owner: 374857, offset: 3407872) wanted: 2, have: 4
Delete backref in extent [84302495744 69632]
ERROR: extent[125712527360, 12214272] referencer count mismatch (root: 
21872, owner: 374857, offset: 114540544) wanted: 180, have: 181
Delete backref in extent [125712527360 12214272]
ERROR: extent[125730848768, 5111808] referencer count mismatch (root: 
21872, owner: 374857, offset: 126754816) wanted: 67, have: 68
Delete backref in extent [125730848768 5111808]
ERROR: extent[125730848768, 5111808] referencer count mismatch (root: 
22911, owner: 374857, offset: 126754816) wanted: 67, have: 115
Delete backref in extent [125730848768 5111808]
ERROR: extent[125736914944, 6037504] referencer count mismatch (root: 
21872, owner: 374857, offset: 131866624) wanted: 114, have: 115
ERROR: extent[125736914944, 6037504] referencer count mismatch (root: 
22911, owner: 374857, offset: 131866624) wanted: 114, have: 143
Delete backref in extent [125736914944 6037504]
ERROR: extent[129952120832, 20242432] referencer count mismatch (root: 
21872, owner: 374857, offset: 148234240) wanted: 301, have: 302
Delete backref in extent [129952120832 20242432]
ERROR: extent[129952120832, 20242432] referencer count mismatch (root: 
22911, owner: 374857, offset: 148234240) wanted: 355, have: 433
Delete backref in extent [129952120832 20242432]
ERROR: extent[134925357056, 11829248] referencer count mismatch (root: 
21872, owner: 374857, offset: 180371456) wanted: 160, have: 161
Delete backref in extent [134925357056 11829248]
ERROR: extent[134925357056, 11829248] referencer count mismatch (root: 
22911, owner: 374857, offset: 180371456) wanted: 161, have: 240
Delete backref in extent [134925357056 11829248]
ERROR: extent[147895111680, 12345344] referencer count mismatch (root: 
21872, owner: 374857, offset: 192200704) wanted: 169, have: 170
Delete backref in extent [147895111680 12345344]
ERROR: extent[147895111680, 12345344] referencer count mismatch (root: 
22911, owner: 374857, offset: 192200704) wanted: 171, have: 251
Delete backref in extent [147895111680 12345344]
ERROR: extent[150850146304, 17522688] referencer count mismatch (root: 
21872, owner: 374857, offset: 217653248) wanted: 347, have: 348
Delete backref in extent [150850146304 17522688]
ERROR: extent[156909494272, 55320576] referencer count mismatch (root: 
22911, owner: 374857, offset: 235175936) wanted: 1, have: 1449
Deleted root 2 item[156909494272, 178, 5476627808561673095]
ERROR: extent[156909494272, 55320576] referencer count mismatch (root: 
21872, owner: 374857, offset: 235175936) wanted: 1, have: 556
Deleted root 2 item[156909494272, 178, 7338474132555182983]
ERROR: errors found in extent allocation tree or chunk allocation
cache and super generation don't match, space cache will be invalidated
checking fs roots
Add a hole [4033, 63] in inode [25937109]
ERROR: failed to add hole [8129, 63] in inode [25937109]
ERROR: root 17592 EXTENT_DATA[25937109 8192] gap exists, expected: 
EXTENT_DATA[25937109 8129]
ERROR: failed to add hole [20417, 63] in inode [25937109]
ERROR: root 17592 EXTENT_DATA[25937109 20480] gap exists, expected: 
EXTENT_DATA[25937109 20417]
Add a hole [3349, 747] in inode [25937493]
ERROR: failed to add hole [7445, 747] in inode [25937493]
ERROR: root 17592 EXTENT_DATA[25937493 8192] gap exists, expected: 
EXTENT_DATA[25937493 7445]
ERROR: failed to add hole [11541, 747] in inode [25937493]
ERROR: root 17592 EXTENT_DATA[25937493 12288] gap exists, expected: 
EXTENT_DATA[25937493 11541]
Add a hole [4091, 5] in inode [25941335]
ERROR: failed to add hole [8187, 5] in inode [25941335]
ERROR: root 17592 EXTENT_DATA[25941335 8192] gap exists, expected: 
EXTENT_DATA[25941335 8187]
Can't find file name for inode 0, use 0 instead
ERROR: failed to link the inode 0 to lost+found dir: No such file or 
directory
ERROR: failed to move file '0' to 'lost+found' dir
ERROR: fail to repair nlink of inode 17592 root 0 name 0 filetype 0
ERROR: root 17592 INODE[0] nlink(1061879808) not equal to inode_refs(0)
ERROR: failed to set nbytes in inode 0 root 17592
ERROR: root 17592 INODE[0] nbytes 6640973824 not equal to extent_size 0
...
===============================================================





* Re: Fwd: Re: So, does btrfs check lowmem take days? weeks?
  2018-07-10  1:33                 ` Fwd: Re: So, does btrfs check lowmem take days? weeks? Su Yue
@ 2018-07-10  1:33                   ` Su Yue
  0 siblings, 0 replies; 4+ messages in thread
From: Su Yue @ 2018-07-10  1:33 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Qu Wenruo, Marc MERLIN

Forgot to CC Marc.


* Re: So, does btrfs check lowmem take days? weeks?
       [not found]                 ` <faba0923-8d1f-5270-ba03-ce9cc484e08a@gmx.com>
@ 2018-07-10  4:00                   ` Marc MERLIN
  0 siblings, 0 replies; 4+ messages in thread
From: Marc MERLIN @ 2018-07-10  4:00 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs; +Cc: Su Yue, Su Yue

To fill in for the spectators on the list :)
Su gave me a modified version of btrfsck lowmem that was able to clean
most of my filesystem.
It's not a general-case solution, since it had some hardcoding specific
to my filesystem's problems, but still a great success.
Email quoted below, along with responses to Qu.

On Tue, Jul 10, 2018 at 09:09:33AM +0800, Qu Wenruo wrote:
> 
> 
> On 2018年07月10日 01:48, Marc MERLIN wrote:
> > Success!
> > Well done Su, this is a huge improvement to the lowmem code. It went from days to less than 3 hours.
> 
> Awesome work!
> 
> > I'll paste the logs below.
> > 
> > Questions:
> > 1) I assume I first need to delete a lot of snapshots. What is the limit in your opinion?
> > 100? 150? other?
> 
> My personal recommendation is just 20. Not 150, not even 100.
 
I see. Then I may be forced to recreate multiple filesystems anyway.
I have about 25 btrfs send/receive relationships and I have around 10
historical snapshots for each.

In the future, can't we segment extents/snapshots per subvolume, making
subvolumes mini filesystems within the bigger filesystem?
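
When I do prune them, something like this untested sketch is what I
have in mind (paths are hypothetical for my layout); btrfs subvolume
sync waits for the delayed cleanup Qu describes below:

# delete old snapshots one at a time, then wait for the (delayed)
# cleanup to actually finish before balancing or re-running fsck
for snap in /mnt/mnt/backup/daily_201801*; do
    btrfs subvolume delete "$snap"
done
btrfs subvolume sync /mnt/mnt   # returns once deleted subvolumes are cleaned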

> But snapshot deletion will take time (and it's delayed, so you won't
> know if something went wrong just after "btrfs subv delete"), and it
> even requires a healthy extent tree.
> If all the extent tree errors are just false alerts, that should not
> be a big problem at all.
> 
> > 
> > 2) my filesystem is somewhat misbalanced. Which balance options do you think are safe to use?
> 
> I would recommend manually checking the extent tree for BLOCK_GROUP_ITEMs,
> which will tell you how big a block group is and how much space is used,
> and give you an idea of which block groups can be relocated.
> Then use vrange= to specify the exact block group to relocate.
> 
> One example would be:
> 
> # btrfs ins dump-tree -t extent <dev> | grep -A1 BLOCK_GROUP_ITEM |\
>   tee block_group_dump
> 
> Then the output contains:
> 	item 1 key (13631488 BLOCK_GROUP_ITEM 8388608) itemoff 16206 itemsize 24
> 		block group used 262144 chunk_objectid 256 flags DATA
> 
> The "13631488" is the bytenr of the block group.
> The "8388608" is the length of the block group.
> The "262144" is the used bytes of the block group.
> 
> The less used space, the higher the priority for relocating it (and
> the faster it is to relocate).
> You could write a small script to do it, or there may be some tool to
> do the calculation for you.
 
I usually use something simpler:
Label: 'btrfs_boot'  uuid: e4c1daa8-9c39-4a59-b0a9-86297d397f3b
	Total devices 1 FS bytes used 30.19GiB
	devid    1 size 79.93GiB used 78.01GiB path /dev/mapper/cryptroot

This is bad: I have 30GB of data, but 78 out of 80GB of structures are
full. This is bad news and calls for a balance, correct?
If so, I always struggle with what values I should give to dusage and
musage...
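
(For anyone following along: a common incremental pattern is to start
with a low usage filter and raise it only if nothing gets relocated.
The thresholds below are illustrative, not a recommendation:)

btrfs balance start -dusage=10 /mnt/mnt   # only data block groups <=10% used
btrfs balance start -dusage=25 /mnt/mnt   # raise it if the first pass moved nothing
btrfs balance start -musage=10 /mnt/mnt   # same idea for metadata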

> And only relocate one block group at a time, to avoid possible problems.
> 
> Last but not least, it's highly recommended to do the relocation
> only after unused snapshots are completely deleted.
> (Or relocation would be super, super slow.)

Thank you for the advice. Hopefully this helps someone else too, and
maybe someone can write a relocation helper tool if I don't have the
time to do it myself.
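
Something like this minimal, untested sketch (awk field positions
assumed from Qu's dump-tree sample above) could be a starting point; it
ranks block groups least-used first and prints a ready-made vrange for
each:

#!/bin/bash
# rank block groups by usage ratio, least-used (fastest to relocate) first
dev=/dev/mapper/dshelf2        # substitute your own device

btrfs ins dump-tree -t extent "$dev" | grep -A1 BLOCK_GROUP_ITEM |
awk '/BLOCK_GROUP_ITEM/ {
         start = $4; sub(/\(/, "", start)   # block group bytenr
         len   = $6; sub(/\)/, "", len)     # block group length
     }
     /block group used/ {
         used = $4                          # bytes used in this group
         printf "%3d%% used  vrange=%.0f..%.0f\n", used * 100 / len, start, start + len
     }' | sort -n

Each output line's vrange could then be fed to a single balance, e.g.
btrfs balance start -dvrange=13631488..22020096 /mnt/mnt (numbers from
Qu's example above).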

> > 3) Should I start a scrub now (takes about 1 day) or anything else to
> > check that the filesystem is hopefully not damaged anymore?
> 
> I would normally recommend using btrfs check, but neither mode really
> works here.
> And scrub only checks csums; it doesn't check the internal cross
> references (like the content of the extent tree).
> 
> Maybe Su could skip the whole extent tree check and let lowmem check
> the fs trees only; with --check-data-csum it should do a better job
> than scrub.

I will wait to hear back from Su, but I think the current situation is
that I still have some problems on my FS; they are just
1) not important enough to block mounting RW (it works again now)
2) currently ignored by the modified btrfsck I have, but would cause
problems if I used the real btrfsck.

Correct?

> > 
> > 4) should btrfs check reset the corrupt counter?
> > bdev /dev/mapper/dshelf2 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
> > for now, should I reset it manually?
> 
> It could be pretty easy to implement if not already implemented.

Seems like it's not, given that Su's btrfsck --repair ran to completion
and I still have corrupt set to '2' :)
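
For reference, the counter can be cleared manually, assuming a
btrfs-progs new enough to have the -z (--reset) flag:

btrfs device stats -z /dev/mapper/dshelf2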

Marc
-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
                                      .... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/                       | PGP 7F55D5F27AAF9D08


* Re: So, does btrfs check lowmem take days? weeks?
       [not found]                         ` <58b36f04-3094-7de0-8d5e-e06e280aac00@cn.fujitsu.com>
@ 2018-07-11  1:08                           ` Su Yue
  0 siblings, 0 replies; 4+ messages in thread
From: Su Yue @ 2018-07-11  1:08 UTC (permalink / raw)
  To: Marc MERLIN; +Cc: Su Yue, quwenruo.btrfs, linux-btrfs



On 07/10/2018 06:53 PM, Su Yue wrote:
> 
> 
> On 07/10/2018 12:10 PM, Marc MERLIN wrote:
>> On Tue, Jul 10, 2018 at 08:56:15AM +0800, Su Yue wrote:
>>>> I'm just not clear if my FS is still damaged and btrfsck was just
>>>> hacked to ignore the damage it can't deal with, or whether it was
>>>> able to repair things to a consistent state.
>>>> The fact that I can mount read/write with no errors seems like a
>>>> good sign.
>>>>
>>> Yes, a good sign. Since the extent tree is fixed, the errors left
>>> are in other trees. The worst result I can see is that writes to
>>> some files will report IO errors. This is the cost of RW.
>>
>> Ok, so we agreed that btrfs scrub won't find this, so ultimately I
>> should run normal btrfsck --repair without the special block skip code
>> you added?
>>
> Yes. Here is the normal btrfsck, which skips the extent tree to save
> time. And I fixed a bug which was mentioned in another mail by Qu.
> I had no time to add progress reporting to the fs tree check, though.
> https://github.com/Damenly/btrfs-progs/tree/tmp1
> 
> It may take a long time to fix the unresolved errors:
> #./btrfsck -e 2 --mode=lowmem --repair $dev
> '-e' means to skip the extent tree.
> Running the above command should solve the errors.
If no other errors occur, your FS will be good.

Please do not run the repair from the master branch, please :(.
It would ruin everything we did in recent days.

Thanks,
Su
> Thanks
> Su
> 
>> Since I can mount the filesystem read/write though, I can probably
>> delete a lot of snapshots to help the next fsck to run.
>> I assume the number of snapshots also affects the amount of memory taken
>> by regular fsck, so maybe if I delete enough of them regular fsck
>> --repair will work again?
>>
>> Thanks,
>> Marc
>>




end of thread, other threads:[~2018-07-11  1:03 UTC | newest]

Thread overview: 4+ messages
-- links below jump to the message on this page --
     [not found] <f9bc21d6-fdc3-ca3a-793f-6fe574c7b8c6@cn.fujitsu.com>
     [not found] ` <20180709031054.qfg4x5yzcl4rao2k@merlins.org>
     [not found]   ` <20180709031501.iutlokfvodtkkfhe@merlins.org>
     [not found]     ` <17cc0cc1-b64d-4daa-18b5-bb2da3736ea1@cn.fujitsu.com>
     [not found]       ` <20180709034058.wjavwjdyixx6smbw@merlins.org>
     [not found]         ` <29302c14-e277-2c69-ac08-c4722c2b18aa@cn.fujitsu.com>
     [not found]           ` <20180709155306.zr3p2kolnanvkpny@merlins.org>
     [not found]             ` <trinity-4aae1c42-a85e-4c73-a30e-8b0d0be05e86-1531152875875@msvc-mesg-gmx023>
     [not found]               ` <20180709174818.wq2d4awmgasxgwad@merlins.org>
2018-07-10  1:33                 ` Fwd: Re: So, does btrfs check lowmem take days? weeks? Su Yue
2018-07-10  1:33                   ` Su Yue
     [not found]                 ` <faba0923-8d1f-5270-ba03-ce9cc484e08a@gmx.com>
2018-07-10  4:00                   ` Marc MERLIN
     [not found]                 ` <trinity-4546309e-d603-4d29-885a-e76da594f792-1531159860064@msvc-mesg-gmx021>
     [not found]                   ` <20180709222218.GP9859@merlins.org>
     [not found]                     ` <440b7d12-3504-8b4f-5aa4-b1f39f549730@cn.fujitsu.com>
     [not found]                       ` <20180710041037.4ynitx3flubtwtvc@merlins.org>
     [not found]                         ` <58b36f04-3094-7de0-8d5e-e06e280aac00@cn.fujitsu.com>
2018-07-11  1:08                           ` Su Yue
