From: Andreas Dilger <adilger@dilger.ca>
To: Reindl Harald <h.reindl@thelounge.net>
Cc: Lukas Czerner <lczerner@redhat.com>,
	Wang Shilong <wangshilong1991@gmail.com>,
	linux-ext4@vger.kernel.org, Wang Shilong <wshilong@ddn.com>,
	Shuichi Ihara <sihara@ddn.com>
Subject: Re: [PATCH] ext4: introduce EXT4_BG_WAS_TRIMMED to optimize trim
Date: Wed, 27 May 2020 13:11:55 -0600
Message-ID: <7ED9AC71-D768-44BE-A227-95F876A4C1DF@dilger.ca>
In-Reply-To: <ece9cd79-6d97-db36-66bb-f02bd6bf6573@thelounge.net>


On May 27, 2020, at 4:56 AM, Reindl Harald <h.reindl@thelounge.net> wrote:
> On 27.05.20 at 12:32, Lukas Czerner wrote:
>> On Wed, May 27, 2020 at 12:11:52PM +0200, Reindl Harald wrote:
>>> 
>>> On 27.05.20 at 11:57, Lukas Czerner wrote:
>>>> On Wed, May 27, 2020 at 11:32:02AM +0200, Reindl Harald wrote:
>>>>> would that also fix the issue that *way too much* is trimmed all the
>>>>> time, no matter if it's a thin-provisioned vmware disk or a physical
>>>>> RAID10 with SSDs?
>>>> 
>>>> no, the mechanism remains the same, but the proposal is to make it
>>>> persistent across re-mounts.
>>>> 
>>>>> 
>>>>> there is no way 315 MB of deletes happened within 2 hours or so on a
>>>>> system with just 485M used
>>>> 
>>>> The reason is that we're working on block group granularity. So if you
>>>> have an almost-free block group and you free some blocks from it, the
>>>> flag gets cleared, and the next time you run fstrim it'll trim all the
>>>> free space in the group. Then again, if you free some blocks from the
>>>> group, the flag gets cleared again ...
>>>> 
>>>> But I don't think this is a problem at all. Certainly not worth tracking
>>>> free/trimmed extents to solve it.
>>> 
>>> it is a problem
>>> 
>>> on a daily "fstrim -av" you trim gigabytes of already-trimmed blocks,
>>> which for example on a vmware thin-provisioned vdisk makes it down to
>>> CBT (changed-block-tracking)
>>> 
>>> so instead of that untouched space being completely ignored, thanks to
>>> CBT it's considered as changed and verified in the follow-up backup
>>> run, which takes magnitudes longer than needed
>> 
>> Looks like you identified the problem then ;)
> 
> well, in a perfect world.....
> 
>> But seriously, trim/discard was always considered advisory, and the
>> storage is completely free to do whatever it wants with the
>> information. It might even be the case that the discard requests are
>> ignored and we might not even need an optimization like this. But
>> regardless, it does take time to go through the block groups, so as a
>> result this optimization is useful in the fs itself.
> 
> luckily at least fstrim is non-blocking in a vmware environment; on my
> physical box it takes ages
> 
> this machine *does nothing* but wait to be cloned; 235 MB of supposedly
> deleted data within 50 minutes is absurd on a completely idle guest
> 
> so even when I am all in for optimizations, that's way over the top
> 
> [root@master:~]$ fstrim -av
> /boot: 0 B (0 bytes) trimmed on /dev/sda1
> /: 235.8 MiB (247201792 bytes) trimmed on /dev/sdb1
> 
> [root@master:~]$ df
> Filesystem     Type  Size  Used Avail Use% Mounted on
> /dev/sdb1      ext4  5.8G  502M  5.3G   9% /
> /dev/sda1      ext4  485M   39M  443M   9% /boot


I don't think that this patch will necessarily fix the problem you
are seeing, in the sense that WAS_TRIMMED was previously stored in
the group descriptor in memory, so repeated fstrim runs _shouldn't_
result in the group being trimmed unless it had some blocks freed.
If your image has even one block freed in any group, then fstrim will
result in all of the free blocks in that group being trimmed again.
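
To make that concrete, here is a rough userspace model of the in-memory
flag handling described above (not the actual ext4 code; the struct,
field, and function names are invented for illustration):

#include <stdbool.h>
#include <stdio.h>

struct group {
	bool was_trimmed;		/* set after a successful trim pass */
	unsigned long free_blocks;
};

/* fstrim pass: skip groups already trimmed since their last free */
unsigned long trim_group(struct group *grp)
{
	if (grp->was_trimmed)
		return 0;		/* nothing to do, no discard issued */
	grp->was_trimmed = true;
	return grp->free_blocks;	/* whole free space trimmed again */
}

/* freeing even one block re-arms the group for the next fstrim */
void free_group_blocks(struct group *grp, unsigned long count)
{
	grp->free_blocks += count;
	grp->was_trimmed = false;
}

int main(void)
{
	struct group g = { .was_trimmed = true, .free_blocks = 1000000 };

	free_group_blocks(&g, 1);	/* one block freed ... */
	/* ... and the next fstrim re-trims all 1000001 free blocks */
	printf("%lu blocks trimmed\n", trim_group(&g));
	return 0;
}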

That said, I think a follow-on optimization would be to defer *clearing*
the WAS_TRIMMED flag until at least some minimum number of blocks have
been freed (e.g. at least 1024 blocks freed, or the group is "empty", or
similar).  That would avoid excess TRIM calls down to the storage when
only a few blocks were freed, which would be unlikely to actually result
in an erase block being freed.  This threshold could be dynamic, based on
the minimum trim size passed by fstrim (e.g. min(1024 * minblocks, 16MB)).
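
A minimal sketch of that hysteresis, assuming a hypothetical per-group
"freed_since_trim" counter (the names are mine, not from the patch):

#include <stdbool.h>

#define BLOCK_SIZE	4096UL
#define MAX_THRESH	((16UL << 20) / BLOCK_SIZE)	/* 16MB cap, in blocks */

struct group {
	bool was_trimmed;
	unsigned long freed_since_trim;	/* blocks freed since last trim */
};

/* min(1024 * minblocks, 16MB) from the suggestion above */
unsigned long clear_threshold(unsigned long minblocks)
{
	unsigned long t = 1024 * minblocks;

	return t < MAX_THRESH ? t : MAX_THRESH;
}

/* Only re-arm the group for trimming once enough freed blocks pile up */
void track_free(struct group *grp, unsigned long count,
		unsigned long minblocks)
{
	grp->freed_since_trim += count;
	if (grp->freed_since_trim >= clear_threshold(minblocks)) {
		grp->was_trimmed = false;
		grp->freed_since_trim = 0;
	}
}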

That would also fit in nicely with changing "-o discard" over to using
per-block-group tracking of the trim state, using the same mechanism
as fstrim, rather than the per-extent tracking that is used today,
which causes (IMHO) too many small trims to the storage to be useful.
Not only do many small trim requests cause overhead during IO, but they
can also miss actual trims because the individual extents are smaller
than the discard size of the storage, and per-extent tracking doesn't
combine the trim range with adjacent free blocks.
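
On the extent-merging point, here is a sketch of a hypothetical helper
(not ext4's actual "-o discard" path) that widens a freed extent to the
surrounding run of free blocks before deciding whether a discard is
worthwhile:

#include <stdbool.h>

/* test-bit-style helper over a free-block bitmap: 1 = free */
bool block_is_free(const unsigned char *bitmap, unsigned long blk)
{
	return bitmap[blk / 8] & (1 << (blk % 8));
}

/* Grow [start, end) to cover adjacent free blocks on both sides */
void widen_to_free_run(const unsigned char *bitmap, unsigned long nblocks,
		       unsigned long *start, unsigned long *end)
{
	while (*start > 0 && block_is_free(bitmap, *start - 1))
		(*start)--;
	while (*end < nblocks && block_is_free(bitmap, *end))
		(*end)++;
}

/* Worth a discard only if the run covers the device's discard unit */
bool worth_discarding(unsigned long start, unsigned long end,
		      unsigned long discard_granularity)
{
	return end - start >= discard_granularity;
}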

Doing the block-group-based trim in the background, when a group has had
a bunch of blocks freed and the filesystem is idle (maybe a per-cpu
counter that is incremented whenever user read/write requests are done,
and checked for idleness when there are groups to be trimmed), doesn't
have to be perfect or totally crash-proof, since there will always be
another chance to trim the group the next time some blocks are freed,
or if a manual "fstrim" is called on the group with a smaller minlen.

Cheers, Andreas

