* fstab autodefrag with 5.10 & 5.15 kernels, Debian?
@ 2022-01-27 15:25 piorunz
  2022-01-27 21:14 ` Chris Murphy
                   ` (3 more replies)
  0 siblings, 4 replies; 21+ messages in thread
From: piorunz @ 2022-01-27 15:25 UTC (permalink / raw)
  To: linux-btrfs

Hi all,

Is it safe and recommended to use the autodefrag mount option in fstab?
I am considering two machines here: a normal desktop with Btrfs as
/home, and a server with VMs and databases, also on a Btrfs /home. Both
are Btrfs RAID10. Both are heavily fragmented; in fact, I have never
defragmented them. I run Debian 11 on the server (kernel 5.10) and
Debian Testing (kernel 5.15) on the desktop.

Running a manual defrag on the server, like:
sudo btrfs filesystem defrag -v -t4G -r /home
takes ages and can cause 120-second hung-task warnings in dmesg due to
service timeouts. I would prefer to defragment gradually, over time, and
the autodefrag mount option seems well suited for that.

My current fstab mount options:

noatime,space_cache=v2,compress-force=zstd:3 0 2
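
For reference, if I go the autodefrag route I assume the entry would
simply gain the extra option - illustrative only, device and mount
point omitted as above:

noatime,space_cache=v2,compress-force=zstd:3,autodefrag 0 2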

Will autodefrag break CoW-shared (reflinked) files? For example, if I
copy-paste a file and save space via reflink, will defrag destroy that
space saving?
Also, will autodefrag compress files automatically, as the mount option
enforces (compress-force=zstd:3)?

Any suggestions welcome.


--
With kindest regards, Piotr.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
⠈⠳⣄⠀⠀⠀⠀


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-27 15:25 fstab autodefrag with 5.10 & 5.15 kernels, Debian? piorunz
@ 2022-01-27 21:14 ` Chris Murphy
  2022-01-27 22:45 ` Qu Wenruo
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 21+ messages in thread
From: Chris Murphy @ 2022-01-27 21:14 UTC (permalink / raw)
  To: piorunz; +Cc: Btrfs BTRFS

On Thu, Jan 27, 2022 at 8:25 AM piorunz <piorunz@gmx.com> wrote:
>
> Hi all,
>
> Is it safe & recommended to run autodefrag mount option in fstab?

Yes. But it's intended for the desktop use case, with small database
files that tend to get heavily fragmented. It's not intended for larger
or active databases, or for VMs, where the write pattern will trigger
autodefrag far too often. Autodefrag works by simply dirtying the
entire file, causing the whole file to be submitted through the normal
write path; i.e. it's basically the same as duplicating the file and
deleting the original. Some workloads will therefore trigger a ton of
excessive writes and slow everything down. Hence it's mainly for the
desktop use case.


> I am considering two machines here, normal desktop which has Btrfs as
> /home, and server with VM and other databases also btrfs /home.

You probably don't want to use autodefrag. Instead you can configure
btrfsmaintenance to do a scheduled defragment on specific directories.
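
A rough sketch of what that looks like - the exact config file location
and variable names depend on how your distro packages btrfsmaintenance,
so verify against the shipped defaults file:

# e.g. /etc/default/btrfsmaintenance or /etc/sysconfig/btrfsmaintenance
BTRFS_DEFRAG_PATHS="/home/databases"
BTRFS_DEFRAG_PERIOD="weekly"

That way only the directories you list get the scheduled defragment,
and the rest of the filesystem is left alone.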


>Both
> Btrfs RAID10 types. Both are heavily fragmented. I never defragmented
> them, in fact. I run Debian 11 on server (kernel 5.10) and Debian
> Testing (kernel 5.15) on desktop.
>
> Running manual defrag on server machine, like:
> sudo btrfs filesystem defrag -v -t4G -r /home
> takes ages and can cause 120 second timeout kernel error in dmesg due to
> service timeouts. I prefer to autodefrag gradually, overtime, mount
> option seems to be good for that.

For VM images you could use a target extent size of 1-4 MiB instead of
the default 32 MiB. Is this on a spinning hard drive or an SSD?
Defragmenting matters more on spinning drives, where fragmentation
leads to high latency.
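
For example, something along these lines (the path is a placeholder for
your VM image directory):

sudo btrfs filesystem defrag -v -r -t 1M /home/vm-images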


> My current fstab mounting:
>
> noatime,space_cache=v2,compress-force=zstd:3 0 2

In this case you don't want the target extent size to be larger than
128 KiB, since compressed extents are at most 128 KiB. If you use a
target size bigger than that, it's not actually going to do much
defragmenting; it'll just submit a ton of extents to be rewritten
elsewhere and take a long time.


>
> Will autodefrag break COW files?

It's not so much that it breaks them as that it'll get triggered
constantly while the file is in use - and performance will get worse.

Also, defragment is not reflink/snapshot aware, so you might want to
exclude any directories you regularly defragment from snapshots. The
autodefrag use case (web-browser-type workloads) involves files small
enough that duplicating the extents when defragmenting them isn't a big
problem, unlike large files suddenly no longer being deduplicated (via
reflink copy or snapshot).



-- 
Chris Murphy


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-27 15:25 fstab autodefrag with 5.10 & 5.15 kernels, Debian? piorunz
  2022-01-27 21:14 ` Chris Murphy
@ 2022-01-27 22:45 ` Qu Wenruo
  2022-01-28 11:55 ` Kai Krakow
  2022-01-29  1:33 ` piorunz
  3 siblings, 0 replies; 21+ messages in thread
From: Qu Wenruo @ 2022-01-27 22:45 UTC (permalink / raw)
  To: piorunz, linux-btrfs



On 2022/1/27 23:25, piorunz wrote:
> Hi all,
>
> Is it safe & recommended to run autodefrag mount option in fstab?
> I am considering two machines here, normal desktop which has Btrfs as
> /home, and server with VM and other databases also btrfs /home. Both
> Btrfs RAID10 types. Both are heavily fragmented. I never defragmented
> them, in fact. I run Debian 11 on server (kernel 5.10) and Debian
> Testing (kernel 5.15) on desktop.

It's very embarrassing to admit, but defrag behavior in btrfs has some
bugs, not only in the latest rework in v5.16 but also in older kernels.

>
> Running manual defrag on server machine, like:
> sudo btrfs filesystem defrag -v -t4G -r /home

-t4G is too large. The max extent size in btrfs is only 128M.

Thus all extents will be re-written unconditionally.

> takes ages and can cause 120 second timeout kernel error in dmesg due to
> service timeouts. I prefer to autodefrag gradually, overtime, mount
> option seems to be good for that.

The 120s timeout is a bug we need to address. I checked the current
code and the v5.15 code, and unfortunately it seems we don't call
cond_resched(), which is what causes the 120s timeout.

Now let's talk about autodefrag.

In v5.15 (and stable v5.10) there is a hidden bug in the
autodefrag-only branch: it will skip a lot of extents that should be
defragged.

Thus autodefrag may not help much; for now, manual defrag is the best
option.

>
> My current fstab mounting:
>
> noatime,space_cache=v2,compress-force=zstd:3 0 2
>
> Will autodefrag break COW files? Like I copy paste a file and I save
> space, but defrag with destroy this space saving?

As mentioned in the man page, defrag (whether automatic or manual) will
break reflinks and snapshot sharing, and thus increase space usage.

> Also, will autodefrag compress files automatically, as mount option
> enforces (compress-force=zstd:3)?

If you're using compression, then the max extent size is only 128K
(which is another cause of fragmentation).

Defrag can handle it, but you may want to pass an even smaller target
extent size to the defrag subcommand.

Thanks,
Qu

>
> Any suggestions welcome.
>
>
> --
> With kindest regards, Piotr.
>
> ⢀⣴⠾⠻⢶⣦⠀
> ⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
> ⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
> ⠈⠳⣄⠀⠀⠀⠀


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-27 15:25 fstab autodefrag with 5.10 & 5.15 kernels, Debian? piorunz
  2022-01-27 21:14 ` Chris Murphy
  2022-01-27 22:45 ` Qu Wenruo
@ 2022-01-28 11:55 ` Kai Krakow
  2022-01-28 12:14   ` piorunz
                     ` (3 more replies)
  2022-01-29  1:33 ` piorunz
  3 siblings, 4 replies; 21+ messages in thread
From: Kai Krakow @ 2022-01-28 11:55 UTC (permalink / raw)
  To: piorunz; +Cc: linux-btrfs

Hi!

On Fri, 28 Jan 2022 at 08:51, piorunz <piorunz@gmx.com> wrote:

> Is it safe & recommended to run autodefrag mount option in fstab?

I've tried autodefrag a few times, and usually it caused btrfs to
explode under some loads (usually databases and VMs), ending in invalid
tree states; after a reboot the FS was unmountable. This may have
changed in the meantime (the last time I tried was in the 5.10 series).
YMMV. Run and test your backups.


> I am considering two machines here, normal desktop which has Btrfs as
> /home, and server with VM and other databases also btrfs /home. Both
> Btrfs RAID10 types. Both are heavily fragmented. I never defragmented
> them, in fact. I run Debian 11 on server (kernel 5.10) and Debian
> Testing (kernel 5.15) on desktop.

Database and VM workloads are not well suited to btrfs CoW. I'd
consider using `chattr +C` on the directories storing such data, then
backing up the contents, emptying the directory, and restoring the
contents, thus properly recreating the files in nocow mode. This allows
the databases and VMs to write data in place. You lose transactional
guarantees and checksums, but at least for databases this is probably
better left to the database itself anyway. For VMs it depends; usually
the filesystem embedded in the VM images should detect errors properly.
That said, qemu qcow2 works very well for me even with cow, but I
disabled compression (`chattr +m`) for the images directory ("+m" is
supported by recent chattr versions).


> Running manual defrag on server machine, like:
> sudo btrfs filesystem defrag -v -t4G -r /home
> takes ages and can cause 120 second timeout kernel error in dmesg due to
> service timeouts. I prefer to autodefrag gradually, overtime, mount
> option seems to be good for that.

This is probably the worst scenario you can create: forcing
compression forces extents to be no bigger than 128k, which in turn
increases IO overhead and encourages fragmentation a lot. Since you are
forcing compression, setting a target size of 4G probably does nothing;
your extents will end up at 128k anyway.

I also found that depending on your workload, RAID10 may not be
beneficial at all because IO will always engage all spindles. In a
multi-process environment, a non-striping mode may be better (e.g.
RAID1). The high fragmentation would emphasize this bottleneck a lot.


> My current fstab mounting:
>
> noatime,space_cache=v2,compress-force=zstd:3 0 2
>
> Will autodefrag break COW files? Like I copy paste a file and I save
> space, but defrag with destroy this space saving?

Yes, it will. You could run the bees daemon instead to recombine
duplicate extents. It usually gives better space savings than forcing
compression. Using forced compression is probably only useful for
archive storage, or when every single byte counts.
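
For reference, bees is pointed at the filesystem by UUID; depending on
packaging it's started roughly like one of these (the exact wrapper and
service names are from memory, so verify against your package):

beesd <filesystem-UUID>
systemctl enable --now beesd@<filesystem-UUID>.service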


> Also, will autodefrag compress files automatically, as mount option
> enforces (compress-force=zstd:3)?

It should, but I never tried. Compression is usually only skipped for
very small extents (when it wouldn't save a block), or for inline
extents. If you run without forced compression, a heuristic is used to
decide whether to compress an extent.


Regards,
Kai


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 11:55 ` Kai Krakow
@ 2022-01-28 12:14   ` piorunz
  2022-01-28 12:42     ` Qu Wenruo
  2022-01-28 13:00   ` Qu Wenruo
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 21+ messages in thread
From: piorunz @ 2022-01-28 12:14 UTC (permalink / raw)
  To: Kai Krakow; +Cc: linux-btrfs

Hi all,

Chris, Qu, Kai, thanks for all your replies. :)

As I mentioned, I had never considered defragmenting before, so I am
not experienced in this matter, but now I have decided to tackle the
problem a bit more seriously.
So yes, I run /home as RAID10 on 4 HDDs and I don't want to change that
at this time; it's running quite fine. I'd love to run RAID 5 or 6, but
I know that functionality is not ready yet.
The HDDs are 2 TB each in the server, and 500 GB each in the desktop.

So I see that the solution I found while searching the internet (defrag
with -t4G) is not suitable for a compressed Btrfs.
Also, autodefrag will not work in my case, since I use the 5.10 and
5.15 kernels.

So I should manually run:
sudo btrfs filesystem defrag -v -t128K -r /home
That would be more suitable, right?

I know performance can be improved for the databases and VMs I run (6x
Windows running 24/7), but I don't want to complicate the systems more
than they already are, so I will skip manual attribute changes at this
point; let me try one thing at a time. :)

If manual defragging with -t128K is correct, I can run that regularly.
Please let me know if that is your consensus. Thank you.
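
For example, I could schedule it weekly with something like this (just
a sketch, path and schedule to taste):

#!/bin/sh
# /etc/cron.weekly/btrfs-defrag - weekly defrag with a 128K target
btrfs filesystem defrag -r -t 128K /home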

On 28/01/2022 11:55, Kai Krakow wrote:
> Hi!
>
> On Fri, 28 Jan 2022 at 08:51, piorunz <piorunz@gmx.com> wrote:
>
>> Is it safe & recommended to run autodefrag mount option in fstab?
>
> I've tried autodefrag a few times, and usually it caused btrfs to
> explode under some loads (usually databases and VMs), ending in
> invalid tree states, and after reboot the FS was unmountable. This may
> have changed meanwhile (last time I tried was in the 5.10 series).
> YMMV. Run and test your backups.
>
>
>> I am considering two machines here, normal desktop which has Btrfs as
>> /home, and server with VM and other databases also btrfs /home. Both
>> Btrfs RAID10 types. Both are heavily fragmented. I never defragmented
>> them, in fact. I run Debian 11 on server (kernel 5.10) and Debian
>> Testing (kernel 5.15) on desktop.
>
> Database and VM workloads are not well suited for btrfs-cow. I'd
> consider using `chattr +C` on the directories storing such data, then
> backup the contents, purge the directory empty, and restore the
> contents, thus properly recreating the files in nocow mode. This
> allows the databases and VMs to write data in-place. You're losing
> transactional guarantees and checksums but at least for databases,
> this is probably better left to the database itself anyways. For VMs
> it depends, usually the embedded VM filesystem running in the images
> should detect errors properly. That said, qemu qcow2 works very well
> for me even with cow but I disabled compression (`chattr +m`) for the
> images directory ("+m" is supported by recent chattr versions).
>
>
>> Running manual defrag on server machine, like:
>> sudo btrfs filesystem defrag -v -t4G -r /home
>> takes ages and can cause 120 second timeout kernel error in dmesg due to
>> service timeouts. I prefer to autodefrag gradually, overtime, mount
>> option seems to be good for that.
>
> This is probably the worst scenario you can create: forcing
> compression forces extents to be no bigger than 128k, which in turn
> increases IO overhead, and encourages fragmentation a lot. Since you
> are forcing compression, setting a target size of 4G probably does
> nothing, your extents will end up with 128k size.
>
> I also found that depending on your workload, RAID10 may not be
> beneficial at all because IO will always engage all spindles. In a
> multi-process environment, a non-striping mode may be better (e.g.
> RAID1). The high fragmentation would emphasize this bottleneck a lot.
>
>
>> My current fstab mounting:
>>
>> noatime,space_cache=v2,compress-force=zstd:3 0 2
>>
>> Will autodefrag break COW files? Like I copy paste a file and I save
>> space, but defrag with destroy this space saving?
>
> Yes, it will. You could run the bees daemon instead to recombine
> duplicate extents. It usually gives better space savings than forcing
> compression. Using forced compression is probably only useful for
> archive storage, or when every single byte counts.
>
>
>> Also, will autodefrag compress files automatically, as mount option
>> enforces (compress-force=zstd:3)?
>
> It should, but I never tried. Compression is usually only skipped for
> very small extents (when it wouldn't save a block), or for inline
> extents. If you run without forced compression, a heuristic is used
> for whether compressing an extent.
>
>
> Regards,
> Kai


--
With kindest regards, Piotr.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
⠈⠳⣄⠀⠀⠀⠀


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 12:14   ` piorunz
@ 2022-01-28 12:42     ` Qu Wenruo
  2022-01-29  1:37       ` piorunz
  0 siblings, 1 reply; 21+ messages in thread
From: Qu Wenruo @ 2022-01-28 12:42 UTC (permalink / raw)
  To: piorunz, Kai Krakow; +Cc: linux-btrfs



On 2022/1/28 20:14, piorunz wrote:
> Hi all,
>
> Chris, Qu, Kai, thanks for all your replies. :)
>
> As I mentioned, I never considered defragmenting before, so I am not
> experienced in this matter, but now I decided to tackle this problem a
> bit more seriously.
> So yes, I run /home RAID10 on 4 HDDs and I don't want to change it at
> this time. It's running quite fine. I'd love to run RAID 5 or 6 but I
> know this functionality is not ready.
> HDDs are 2TB each in my server, and 500GB each in desktop.
>
> So, I see that solution I found while searching internet (defrag with
> -t4G) is not suitable for compressed Btrfs.
> Also, autodefrag will not work in my case. I use 5.10 and 5.15 kernels.
>
> So, I should manually run:
> sudo btrfs filesystem defrag -v -t128K -r /home
> That will be more suitable, right?

Yep, more suitable for your kernels.

BTW, there is already a patch submitted to address some problems with
compressed extents, e.g. not defragging compressed extents which are
already at the max size.

For now, your -t128K would emulate that.

>
> I know performance can be improved for databases or VMs I run (6x
> Windows running 24/7), but I don't want to complicate systems more than
> they already are, so I will skip manual attr changes at this point, let
> me try one thing at a time. :)
>
> If manual defragging with -t128K is correct, I can run that regularly,
> please let me know if that would be your consensus guys. Thank you.

With -t 128k you may hit the 120s timeout problem less frequently.

That said, sometimes you may want to reduce the value further, to
something like 64K, to reduce the workload.
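
E.g. (just illustrative):

sudo btrfs filesystem defrag -v -t 64K -r /home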

I may craft a fix to add cond_resched() inside btrfs_defrag_file() so
that it can be backported to v5.15, to make the problem go away for good.

Thanks,
Qu

>
> On 28/01/2022 11:55, Kai Krakow wrote:
>> Hi!
>>
>> On Fri, 28 Jan 2022 at 08:51, piorunz <piorunz@gmx.com> wrote:
>>
>>> Is it safe & recommended to run autodefrag mount option in fstab?
>>
>> I've tried autodefrag a few times, and usually it caused btrfs to
>> explode under some loads (usually databases and VMs), ending in
>> invalid tree states, and after reboot the FS was unmountable. This may
>> have changed meanwhile (last time I tried was in the 5.10 series).
>> YMMV. Run and test your backups.
>>
>>
>>> I am considering two machines here, normal desktop which has Btrfs as
>>> /home, and server with VM and other databases also btrfs /home. Both
>>> Btrfs RAID10 types. Both are heavily fragmented. I never defragmented
>>> them, in fact. I run Debian 11 on server (kernel 5.10) and Debian
>>> Testing (kernel 5.15) on desktop.
>>
>> Database and VM workloads are not well suited for btrfs-cow. I'd
>> consider using `chattr +C` on the directories storing such data, then
>> backup the contents, purge the directory empty, and restore the
>> contents, thus properly recreating the files in nocow mode. This
>> allows the databases and VMs to write data in-place. You're losing
>> transactional guarantees and checksums but at least for databases,
>> this is probably better left to the database itself anyways. For VMs
>> it depends, usually the embedded VM filesystem running in the images
>> should detect errors properly. That said, qemu qcow2 works very well
>> for me even with cow but I disabled compression (`chattr +m`) for the
>> images directory ("+m" is supported by recent chattr versions).
>>
>>
>>> Running manual defrag on server machine, like:
>>> sudo btrfs filesystem defrag -v -t4G -r /home
>>> takes ages and can cause 120 second timeout kernel error in dmesg due to
>>> service timeouts. I prefer to autodefrag gradually, overtime, mount
>>> option seems to be good for that.
>>
>> This is probably the worst scenario you can create: forcing
>> compression forces extents to be no bigger than 128k, which in turn
>> increases IO overhead, and encourages fragmentation a lot. Since you
>> are forcing compression, setting a target size of 4G probably does
>> nothing, your extents will end up with 128k size.
>>
>> I also found that depending on your workload, RAID10 may not be
>> beneficial at all because IO will always engage all spindles. In a
>> multi-process environment, a non-striping mode may be better (e.g.
>> RAID1). The high fragmentation would emphasize this bottleneck a lot.
>>
>>
>>> My current fstab mounting:
>>>
>>> noatime,space_cache=v2,compress-force=zstd:3 0 2
>>>
>>> Will autodefrag break COW files? Like I copy paste a file and I save
>>> space, but defrag with destroy this space saving?
>>
>> Yes, it will. You could run the bees daemon instead to recombine
>> duplicate extents. It usually gives better space savings than forcing
>> compression. Using forced compression is probably only useful for
>> archive storage, or when every single byte counts.
>>
>>
>>> Also, will autodefrag compress files automatically, as mount option
>>> enforces (compress-force=zstd:3)?
>>
>> It should, but I never tried. Compression is usually only skipped for
>> very small extents (when it wouldn't save a block), or for inline
>> extents. If you run without forced compression, a heuristic is used
>> for whether compressing an extent.
>>
>>
>> Regards,
>> Kai
>
>
> --
> With kindest regards, Piotr.
>
> ⢀⣴⠾⠻⢶⣦⠀
> ⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
> ⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
> ⠈⠳⣄⠀⠀⠀⠀


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 11:55 ` Kai Krakow
  2022-01-28 12:14   ` piorunz
@ 2022-01-28 13:00   ` Qu Wenruo
  2022-01-28 15:48     ` Kai Krakow
  2022-01-28 16:05   ` Remi Gauvin
  2022-01-28 16:25   ` Remi Gauvin
  3 siblings, 1 reply; 21+ messages in thread
From: Qu Wenruo @ 2022-01-28 13:00 UTC (permalink / raw)
  To: Kai Krakow, piorunz; +Cc: linux-btrfs



On 2022/1/28 19:55, Kai Krakow wrote:
> Hi!
>
> On Fri, 28 Jan 2022 at 08:51, piorunz <piorunz@gmx.com> wrote:
>
>> Is it safe & recommended to run autodefrag mount option in fstab?
>
> I've tried autodefrag a few times, and usually it caused btrfs to
> explode under some loads (usually databases and VMs), ending in
> invalid tree states, and after reboot the FS was unmountable. This may
> have changed meanwhile (last time I tried was in the 5.10 series).
> YMMV. Run and test your backups.

Mind sharing more details?

"Run and test your backups" is always a good principle, but it's a
shame for a fs developer to have to rely on it.

It's better to be sure the problems you hit are already fixed,
especially since we're already doing bug hunts in the defrag code;
don't let the chance escape.

>
>
>> I am considering two machines here, normal desktop which has Btrfs as
>> /home, and server with VM and other databases also btrfs /home. Both
>> Btrfs RAID10 types. Both are heavily fragmented. I never defragmented
>> them, in fact. I run Debian 11 on server (kernel 5.10) and Debian
>> Testing (kernel 5.15) on desktop.
>
> Database and VM workloads are not well suited for btrfs-cow. I'd
> consider using `chattr +C` on the directories storing such data, then
> backup the contents, purge the directory empty, and restore the
> contents, thus properly recreating the files in nocow mode. This
> allows the databases and VMs to write data in-place. You're losing
> transactional guarantees and checksums but at least for databases,
> this is probably better left to the database itself anyways. For VMs
> it depends, usually the embedded VM filesystem running in the images
> should detect errors properly. That said, qemu qcow2 works very well
> for me even with cow but I disabled compression (`chattr +m`) for the
> images directory ("+m" is supported by recent chattr versions).

This may be off-topic as it's not defrag-related anymore, but I have
some crazy ideas, like forced full-block-range compression for such
files.

E.g. even if you dirty just one byte of such a file (which has
something like forced_rewrite_block_size=16K), the whole 16K-aligned
range would be re-dirtied and forced to be written back (and compressed
if possible).

This way the behavior is still CoW, so you keep csums/compression, and
the extra IO overhead may be offset by compression.

And the on-disk file extents will no longer have cases like a CoW write
in the middle of a larger extent, so you get some of the benefit of
nodatacow.

For now it's just one of my wild dreams, with no benchmark or prototype
to back it up...

Thanks,
Qu
>
>
>> Running manual defrag on server machine, like:
>> sudo btrfs filesystem defrag -v -t4G -r /home
>> takes ages and can cause 120 second timeout kernel error in dmesg due to
>> service timeouts. I prefer to autodefrag gradually, overtime, mount
>> option seems to be good for that.
>
> This is probably the worst scenario you can create: forcing
> compression forces extents to be no bigger than 128k, which in turn
> increases IO overhead, and encourages fragmentation a lot. Since you
> are forcing compression, setting a target size of 4G probably does
> nothing, your extents will end up with 128k size.
>
> I also found that depending on your workload, RAID10 may not be
> beneficial at all because IO will always engage all spindles. In a
> multi-process environment, a non-striping mode may be better (e.g.
> RAID1). The high fragmentation would emphasize this bottleneck a lot.
>
>
>> My current fstab mounting:
>>
>> noatime,space_cache=v2,compress-force=zstd:3 0 2
>>
>> Will autodefrag break COW files? Like I copy paste a file and I save
>> space, but defrag with destroy this space saving?
>
> Yes, it will. You could run the bees daemon instead to recombine
> duplicate extents. It usually gives better space savings than forcing
> compression. Using forced compression is probably only useful for
> archive storage, or when every single byte counts.
>
>
>> Also, will autodefrag compress files automatically, as mount option
>> enforces (compress-force=zstd:3)?
>
> It should, but I never tried. Compression is usually only skipped for
> very small extents (when it wouldn't save a block), or for inline
> extents. If you run without forced compression, a heuristic is used
> for whether compressing an extent.
>
>
> Regards,
> Kai


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 13:00   ` Qu Wenruo
@ 2022-01-28 15:48     ` Kai Krakow
  0 siblings, 0 replies; 21+ messages in thread
From: Kai Krakow @ 2022-01-28 15:48 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: piorunz, linux-btrfs

Hey Qu!

On Fri, 28 Jan 2022 at 14:01, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> > I've tried autodefrag a few times, and usually it caused btrfs to
> > explode under some loads (usually databases and VMs), ending in
> > invalid tree states, and after reboot the FS was unmountable. This may
> > have changed meanwhile (last time I tried was in the 5.10 series).
> > YMMV. Run and test your backups.
>
> Mind to share more details?

I surely will when I hit that case again. Usually I was able to
provoke situations where btrfs would spool "object not found, error
-28" (if my mind doesn't fool me) to dmesg. After this, the FS won't
mount again. Usually bcache was involved, but I don't think there's a
problem in bcache at this stage: it even happens without bcache
writeback, and it never happened without autodefrag. I'm pretty sure
there's a sync/serialization bug in autodefrag somewhere; it seems to
work just fine when running "btrfs filesystem defrag" (although I was
able to freeze the kernel with huge defrags, usually after a hard
reboot the system was just fine). Device write caching is disabled (if
I can believe the drive firmware), so hard reboots never failed me in
the recent past. Except for the autodefrag case, btrfs has been working
super rock solid for me.


> "Run and test your backups" is always a good principle, but a shame to a
> fs developer.

Well, as written above, it has been super rock solid for me during the
past year - big thumbs up to you developers. The few losses I had were
due to some buggy writeback behavior of bcache together with my SSD
firmware - using bcache in write-around mode works around that. I'm
not even sure if bcache is to blame here (except for the 5.15.2 sector
size bug), and since using the data type hinting patches to place
metadata directly on SSD, and data directly on HDD only, the need for
using writeback bcache has been greatly reduced. Btrfs itself has
survived several hard reboots since then, either due to power loss, or
due to GPU freezes, or unsafe wake from hibernation. It just works,
but I didn't try autodefrag again, to stay on the safe side (and since
I'm using bees, and autodefrag breaks extent sharing, I don't really
want to use autodefrag actively). The one time I used autodefrag,
it worked until I booted a qemu machine, error -28 was logged, and
then the FS was gone for good after reboot (open ctree failed). I
restored from backup and disabled autodefrag.

Maybe you want to recreate my setup of that time (5.10.y) and test for yourself:

4x 3TB HDD in meta=raid1 data=single mode, on top of one single bcache
(1TB SSD); writeback mode doesn't seem to matter. Bees running for
deduplication (1GB hash table). Qemu with NTFS images ("unsafe" caching
mode; cow or nocow doesn't seem to matter except for performance, but
raw vs qcow2 format may make the problem more or less frequent). Other
VM solutions (VMware Workstation, VirtualBox) seem to provoke the
autodefrag problem much more often (especially with direct IO). The
write-flushing setting on the qemu image may have an impact, too.

My new setup since kernel 5.15 uses the metadata placement hinting
patches (https://github.com/kakra/linux/pull/20), and I've moved the
metadata to dedicated non-bcache partitions since then (which is
really a great and well performing setup, btw). Thus, I didn't test
autodefrag with that setup yet.


> It's better to be sure the problems you hit is already fixed, especially
> we're already doing bug hunts in defrag code recently, don't let the
> chance escape.

I may give it a retry later, but currently I'm happy not having to
restore the FS (which takes 24h). I could probably test whether
defragging a lot of files in parallel still freezes the system, but
since I'm running with the metadata hinting patches, the whole process
is probably a lot more responsive anyway and may not even freeze the
system anymore.

In the end (at least for my case), it's probably better to have an
out-of-band defragger which integrates with bees. I think Zygo is
working on that (we chatted/mailed about how bees could recombine
extents, and how the sorting of work is currently implemented).


[off-topic, maybe snip]
> > Database and VM workloads are not well suited for btrfs-cow. I'd
> > consider using `chattr +C` on the directories storing such data, then
> > backup the contents, purge the directory empty, and restore the
> > contents, thus properly recreating the files in nocow mode. This
> > allows the databases and VMs to write data in-place. You're losing
> > transactional guarantees and checksums but at least for databases,
> > this is probably better left to the database itself anyways. For VMs
> > it depends, usually the embedded VM filesystem running in the images
> > should detect errors properly. That said, qemu qcow2 works very well
> > for me even with cow but I disabled compression (`chattr +m`) for the
> > images directory ("+m" is supported by recent chattr versions).
>
> This may be off-topic as it's not defrag related anymore, but I have
> some crazy ideas like forced full block range compression for such files.

Happy to hear you're working on ideas; it's really appreciated.


> Like even if you just dirtied one byte of such file (which has something
> like forced_rewrite_block_size=16K), then the whole 16K aligned range
> will be re-dirtied, and forced to be written back (and forced to be
> compressed if it can).

At first glance this sounds like a total waste of resources - but I
think it repays itself very quickly because it reduces metadata
overhead. It could also reduce overly heavy fragmentation due to
compression, and reduce the pressure from orphaned/hidden extent parts.
So it sounds good on second thought. :-)


Regards,
Kai


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 11:55 ` Kai Krakow
  2022-01-28 12:14   ` piorunz
  2022-01-28 13:00   ` Qu Wenruo
@ 2022-01-28 16:05   ` Remi Gauvin
  2022-01-28 18:01     ` Kai Krakow
  2022-01-28 22:42     ` Forza
  2022-01-28 16:25   ` Remi Gauvin
  3 siblings, 2 replies; 21+ messages in thread
From: Remi Gauvin @ 2022-01-28 16:05 UTC (permalink / raw)
  To: Kai Krakow, piorunz; +Cc: linux-btrfs

On 2022-01-28 6:55 a.m., Kai Krakow wrote:

> 
> Database and VM workloads are not well suited for btrfs-cow. I'd
> consider using `chattr +C` on the directories storing such data, then
> backup the contents, purge the directory empty, and restore the
> contents, thus properly recreating the files in nocow mode. This
> allows the databases and VMs to write data in-place. You're losing
> transactional guarantees and checksums but at least for databases,
> this is probably better left to the database itself anyways.

This might be a critically bad idea combined with BTRFS RAID: BTRFS
has no means of keeping the RAID mirrors consistent *other* than COW.


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 11:55 ` Kai Krakow
                     ` (2 preceding siblings ...)
  2022-01-28 16:05   ` Remi Gauvin
@ 2022-01-28 16:25   ` Remi Gauvin
  2022-01-28 18:07     ` Kai Krakow
  3 siblings, 1 reply; 21+ messages in thread
From: Remi Gauvin @ 2022-01-28 16:25 UTC (permalink / raw)
  To: Kai Krakow, piorunz; +Cc: linux-btrfs

On 2022-01-28 6:55 a.m., Kai Krakow wrote:

> Yes, it will. You could run the bees daemon instead to recombine
> duplicate extents. It usually gives better space savings than forcing
> compression. Using forced compression is probably only useful for
> archive storage, or when every single byte counts.
> 
>

I have also found compress-force (with lzo) to be very beneficial for
SATA SSDs. Not only does it improve the sequential read speed, but
forcing the smaller extent size makes it easier to reclaim space lost
to fragmentation.

I found that heavily fragmented files (with no snapshots or reflinks)
could take almost 2x as much space as the file size unless they were
defragged with a large (100M) target size, which causes senseless wear
on an SSD. With compress-force, autodefrag (or a manual 128k defrag)
alleviates this issue.
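
A quick way to see how much space a fragmented file is actually pinning
is the compsize tool, if you have it installed (it reports referenced
vs. actual disk usage per file or directory):

compsize /path/to/fragmented-file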

Conversely, compress-force on HDD completely destroys any semblance of
sequential read speed.





* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 16:05   ` Remi Gauvin
@ 2022-01-28 18:01     ` Kai Krakow
  2022-01-28 18:09       ` Remi Gauvin
  2022-01-28 22:42     ` Forza
  1 sibling, 1 reply; 21+ messages in thread
From: Kai Krakow @ 2022-01-28 18:01 UTC (permalink / raw)
  To: Remi Gauvin; +Cc: piorunz, linux-btrfs

On Fri, 28 Jan 2022 at 17:05, Remi Gauvin <remi@georgianit.com> wrote:
>
> On 2022-01-28 6:55 a.m., Kai Krakow wrote:
>
> >
> > Database and VM workloads are not well suited for btrfs-cow. I'd
> > consider using `chattr +C` on the directories storing such data, then
> > backup the contents, purge the directory empty, and restore the
> > contents, thus properly recreating the files in nocow mode. This
> > allows the databases and VMs to write data in-place. You're losing
> > transactional guarantees and checksums but at least for databases,
> > this is probably better left to the database itself anyways.
>
> This might be critically bad idea combined with BTRFS RAID,, BTRFS does
> not have any means to keeping the Raid mirrors consistent *other* than COW

Yeah, it's not better than traditional RAID, but it's probably also
not worse. The question is: is it worth the massively negative
performance effect on database and VM workloads? At least in this
case: probably not, because it's on HDD. You're buying a consistency
feature at a really high price; online replication to a standby DB
instance may be a better choice.

However, there are cow-friendly file formats. qcow2 for VM images
seems to work fine for me (I just disable compression, because it
creates those vast amounts of extents, and rather let qcow2 do the
compression - which is read-only, but well). The raw format, although
simpler, was really hammering btrfs even with nocow. For databases, at
least the WAL mode of sqlite seems to work fine without too many
downsides. Not sure if other databases offer something similar.
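
For reference, moving an existing raw image over is just the usual
conversion (paths are placeholders):

qemu-img convert -f raw -O qcow2 vm-disk.raw vm-disk.qcow2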


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 16:25   ` Remi Gauvin
@ 2022-01-28 18:07     ` Kai Krakow
  2022-01-28 18:23       ` Remi Gauvin
  0 siblings, 1 reply; 21+ messages in thread
From: Kai Krakow @ 2022-01-28 18:07 UTC (permalink / raw)
  To: Remi Gauvin; +Cc: piorunz, linux-btrfs

On Fri, 28 Jan 2022 at 17:25, Remi Gauvin <remi@georgianit.com> wrote:
>
> On 2022-01-28 6:55 a.m., Kai Krakow wrote:
>
> > Yes, it will. You could run the bees daemon instead to recombine
> > duplicate extents. It usually gives better space savings than forcing
> > compression. Using forced compression is probably only useful for
> > archive storage, or when every single byte counts.
> >
> >
>
> I have also found compress-force  (with lzo) to be very beneficial for
> SATA SSD's.  Not only does it improve the sequential read speed, but
> forcing the smaller extent size makes it easier to reclaim space lost to
> fragmentation.
>
> I found that heavily fragmented files, (with no snapshots or reflinks.)
> could take almost 2X as much space as the file size unless they were
> defragged with a large (100M) target size, which causes senseless wear
> on SSD.  With compress-force, autodefrag (or manual 128k defrag)
> alleviates this issue.
>
> Conversely, compress-force on HDD completely destroys any semblance of
> sequential read speed.

Interesting observations; I'll keep that in mind. I never used btrfs
natively on an SSD: I started using hybrid solutions (bcache) some
years back, and lately moved the metadata to a native SSD (which
improves btrfs performance by a lot).

So the question is: Does the space and performance overhead just come
from metadata? And using compress-force just "fixes" that from the
other side? For HDD, the seek overhead is probably the dominant
performance factor and you'd want to avoid that by all means.

With my new setup (metadata on SSD, data on bcache) the performance
difference between cold and hot bcache is really not that much: the
system feels smooth and responsive in both cases, and both cases are
better than with metadata on bcache.


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 18:01     ` Kai Krakow
@ 2022-01-28 18:09       ` Remi Gauvin
  2022-01-28 18:23         ` Kai Krakow
  0 siblings, 1 reply; 21+ messages in thread
From: Remi Gauvin @ 2022-01-28 18:09 UTC (permalink / raw)
  To: Kai Krakow; +Cc: linux-btrfs

On 2022-01-28 1:01 p.m., Kai Krakow wrote:

> 
> Yeah, it's not better than traditional RAID but it's probably also not
> worse. 

Oh, it's worse, it's much, much worse. Any time there is an
interruption (power failure or system crash) while a nocow file is
being written, the two copies in the RAID will be different. *Which*
copy is read at any given time can be a crapshoot.

Imagine different database processes seeing a completed and a
non-completed transaction at the same time.

And there's no way to fix it other than a full balance (or copying and
rewriting the file). Even manually running a scrub will not
synchronize the mirrors.





* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 18:09       ` Remi Gauvin
@ 2022-01-28 18:23         ` Kai Krakow
  2022-01-28 18:29           ` Remi Gauvin
  2022-01-28 18:33           ` Remi Gauvin
  0 siblings, 2 replies; 21+ messages in thread
From: Kai Krakow @ 2022-01-28 18:23 UTC (permalink / raw)
  To: Remi Gauvin; +Cc: linux-btrfs

On Fri, 28 Jan 2022 at 19:09, Remi Gauvin <remi@georgianit.com> wrote:
>
> On 2022-01-28 1:01 p.m., Kai Krakow wrote:
>
> >
> > Yeah, it's not better than traditional RAID but it's probably also not
> > worse.
>
> Oh, it's worse, it's much much worse.  Any time there is an interruption
> (power failure, or system crash) while a nocow file is being written,
> the two copies in Raid will be different.... *Which* copy is read at any
> given time can be a crap shot....

How is this different from mdraid or hardware raid then? These also
don't know the correct copy after partial or incomplete writes - and
return arbitrary data based on which mirror is chosen.


> Imagine if different database processes see a completed and a non
> completed transaction at the same time?

I never assumed anything else. But database transactions should still
protect against this case: either the transaction checksum matches - or
it doesn't. And any previous data should have been flushed properly and
verified already even before that last transaction becomes current. A
process won't see a completed and a non-completed transaction at the
same time, because it reads the data once; it cannot be Schroedinger's
data.


> And there's no way to fix it other than full balance, (or copy re-write
> the file).. Even manaully running a scrub will not synchronize the mirrors.

How should it do that? Scrub doesn't know which mirror is the correct
one in that case. But that's no different from traditional RAID; those
just synchronize by overwriting whatever they think is the
older/out-of-sync mirror - which may not always be correct. We had
those cases many years back - with both mdraid and hardware RAID.

The only difference here is that btrfs cannot actually buffer a write
to replay it after recovering from a crash - so there's a write hole
which proper hardware RAID may not have.


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 18:07     ` Kai Krakow
@ 2022-01-28 18:23       ` Remi Gauvin
  0 siblings, 0 replies; 21+ messages in thread
From: Remi Gauvin @ 2022-01-28 18:23 UTC (permalink / raw)
  To: Kai Krakow; +Cc: linux-btrfs

On 2022-01-28 1:07 p.m., Kai Krakow wrote:

> 
> So the question is: Does the space and performance overhead just come
> from metadata? And using compress-force just "fixes" that from the
> other side? For HDD, the seek overhead is probably the dominant
> performance factor and you'd want to avoid that by all means.
> 


The space overhead happens because CoW extents are immutable, and are
not removed or modified until no part of them is referenced by any
file.

So for a VM image example: you write out a fresh image, and most of it
is in 128MB extents. Your VM then scribbles literally tens of thousands
of 4k writes all over it, and each of those 4k writes is a new extent.
Even though it replaces data in the old 128MB extent, that space is now
consumed twice.
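
You can watch the extent explosion with filefrag, which prints the
number of extents backing a file (path is a placeholder):

filefrag /path/to/vm-image.raw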


I don't know why compressed sequential read is so slow on HDD. I
suspect there's some optimization that could be done to fix this,
because the same data can be *written* sequentially at full speed, but
not read back.






* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 18:23         ` Kai Krakow
@ 2022-01-28 18:29           ` Remi Gauvin
  2022-01-28 18:44             ` Roman Mamedov
  2022-01-28 18:33           ` Remi Gauvin
  1 sibling, 1 reply; 21+ messages in thread
From: Remi Gauvin @ 2022-01-28 18:29 UTC (permalink / raw)
  To: Kai Krakow; +Cc: linux-btrfs

On 2022-01-28 1:23 p.m., Kai Krakow wrote:

> 
> How is this different from mdraid or hardware raid then? These also
> don't know the correct copy after partial or incomplete writes - and
> return arbitrary data based on which mirror is chosen.
> 


MDraid, and any hardware RAID I'm aware of, will resynchronize the two
copies whenever there is an unclean shutdown. Which copy becomes the
new master is arbitrary, but both will end up the same.

BTRFS RAID with nocow will instead have what I call quantum bits: they
can be 0 and 1 at the same time.


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 18:23         ` Kai Krakow
  2022-01-28 18:29           ` Remi Gauvin
@ 2022-01-28 18:33           ` Remi Gauvin
  1 sibling, 0 replies; 21+ messages in thread
From: Remi Gauvin @ 2022-01-28 18:33 UTC (permalink / raw)
  To: Kai Krakow; +Cc: linux-btrfs

On 2022-01-28 1:23 p.m., Kai Krakow wrote:

> 
> I never assumed something else. But database transactions should still
> protect against this case: Either the transaction checksum matches -
> or it doesn't. And any previous data should have been flushed properly
> and verified already even before that last transaction becomes
> current. A process won't see a completed and non-completed transaction
> at the same time because it reads the data once, it cannot be
> Schroedingers data.

That's exactly my point: this is what BTRFS does. Two copies of your
database, but they are different, so a transaction can be complete in
one and not complete in the other. The database program cannot detect
this error, because it will only see one of the copies at that point
in time.





* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 18:29           ` Remi Gauvin
@ 2022-01-28 18:44             ` Roman Mamedov
  0 siblings, 0 replies; 21+ messages in thread
From: Roman Mamedov @ 2022-01-28 18:44 UTC (permalink / raw)
  To: Remi Gauvin; +Cc: Kai Krakow, linux-btrfs

On Fri, 28 Jan 2022 13:29:22 -0500
Remi Gauvin <remi@georgianit.com> wrote:

> MDraid, and any hardware raid I'm aware of, will resynchronize the 2
> copies whenever there is an unclean shutdown.  which copy becomes the
> new master is arbitrary, but both will be the same.

I believe mdraid writes an event count to each device, and the device
with the highest event count (which has seen completed writes as late
in time prior to the shutdown as possible) will be used as the source
for synchronizing the other ones.

Secondly, it has a write-intent bitmap, so it only resyncs the areas
that differ between that chosen device and the others.
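
For illustration: the per-device event count shows up in mdadm's
examine output, and a write-intent bitmap can be added to an existing
array (device names are placeholders):

mdadm --examine /dev/sda1 | grep -i events
mdadm --grow --bitmap=internal /dev/md0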

-- 
With respect,
Roman


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 16:05   ` Remi Gauvin
  2022-01-28 18:01     ` Kai Krakow
@ 2022-01-28 22:42     ` Forza
  1 sibling, 0 replies; 21+ messages in thread
From: Forza @ 2022-01-28 22:42 UTC (permalink / raw)
  To: Remi Gauvin, Kai Krakow, piorunz, linux-btrfs Mailinglist; +Cc: linux-btrfs



---- From: Remi Gauvin <remi@georgianit.com> -- Sent: 2022-01-28 - 17:05 ----

> On 2022-01-28 6:55 a.m., Kai Krakow wrote:
> 
>> 
>> Database and VM workloads are not well suited for btrfs-cow. I'd
>> consider using `chattr +C` on the directories storing such data, then
>> backup the contents, purge the directory empty, and restore the
>> contents, thus properly recreating the files in nocow mode. This
>> allows the databases and VMs to write data in-place. You're losing
>> transactional guarantees and checksums but at least for databases,
>> this is probably better left to the database itself anyways.
> 
> This might be critically bad idea combined with BTRFS RAID,, BTRFS does
> not have any means to keeping the Raid mirrors consistent *other* than COW
> 

I agree here. Btrfs's default mode uses CoW and csums to stay consistent and crash-safe, and to protect against bit-rot and other underlying corruption. It is probably not good advice to disable these features without clearly explaining the implications.

The first thing I would do is to actually see whether there is a big bottleneck at all, and if there is, look at tuning the application for Btrfs. For example, InnoDB (MySQL/MariaDB) employs doublewrites by default; this is unnecessary on Btrfs. With SQLite there is the Write-Ahead Logging (WAL) mode that should be considered: an append log with periodic merging, which is much better suited to Btrfs (*).
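
For illustration, the two tunings mentioned look roughly like this (values are examples; check your database's documentation before changing them):

# MariaDB/MySQL, e.g. in my.cnf - skip the InnoDB doublewrite buffer
[mysqld]
innodb_doublewrite = 0

-- SQLite, per database
PRAGMA journal_mode=WAL;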

If application tuning isn't enough, I'd look at changing the underlying storage (HDD -> SSD -> NVMe) or enabling wider RAID modes. I'm hoping that the metadata-on-SSD patches will make it to mainline soon, as they also improve things a lot.

As a last resort I would go with nodatacow, while understanding the risks. Remember that even if you use snapshots and Btrfs send/receive, you don't get the protection of csums, which means silent corruption could get copied into your backups.

With regard to csums in databases: you still need a way to check and verify backups. Not all databases can repair a damaged table without restoring from backup.

* https://wiki.tnonline.net/w/Blog/SQLite_Performance_on_Btrfs

Thanks




* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-27 15:25 fstab autodefrag with 5.10 & 5.15 kernels, Debian? piorunz
                   ` (2 preceding siblings ...)
  2022-01-28 11:55 ` Kai Krakow
@ 2022-01-29  1:33 ` piorunz
  3 siblings, 0 replies; 21+ messages in thread
From: piorunz @ 2022-01-29  1:33 UTC (permalink / raw)
  To: linux-btrfs

I have a problem with my last reply, which didn't show up. Testing.



-- 
With kindest regards, Piotr.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
⠈⠳⣄⠀⠀⠀⠀


* Re: fstab autodefrag with 5.10 & 5.15 kernels, Debian?
  2022-01-28 12:42     ` Qu Wenruo
@ 2022-01-29  1:37       ` piorunz
  0 siblings, 0 replies; 21+ messages in thread
From: piorunz @ 2022-01-29  1:37 UTC (permalink / raw)
  To: Qu Wenruo, Kai Krakow; +Cc: linux-btrfs

On 28/01/2022 12:42, Qu Wenruo wrote:
>
> With -t 128k you may hit the 120s timeout problem less frequently.

Will do. Thank you.
>
> Although sometimes you may want to further reduce the value to something
> like 64K, to reduce the workload.
>
> I may craft a fix to add cond_resche() inside btrfs_defrag_file() so
> that can be backported to v5.15, to let the problem to be gone for good.

That would be awesome. Please do. Kernel 5.15 is LTS so many people will
stick with it for a long time to come.

--
With kindest regards, Piotr.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
⠈⠳⣄⠀⠀⠀⠀

