Subject: Salvaging the performance of a high-metadata filesystem
From: Matt Corallo @ 2023-03-03  4:34 UTC
To: Btrfs BTRFS

I have a ~seven-year-old BTRFS filesystem whose performance has slowly degraded to the point of
unusability.

It's a mix of eight 6-16TB 7200 RPM NAS spinning-rust drives, slowly upgraded over the years as
drives failed. It was built back when raid1 was the only option, but metadata has since been
converted to raid1c3. That process took a month or two, but was relatively painless.
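
For reference, a conversion like that is kicked off with a balance filter; a rough sketch, with
/mnt/pool standing in as a placeholder mount point:

    # Convert metadata chunks to raid1c3 (needs kernel 5.5+ and a
    # matching btrfs-progs); runs online and can be paused/resumed.
    $ btrfs balance start -mconvert=raid1c3 /mnt/pool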

The problem is one folder holding backups of a workstation, each made by `cp
--reflink=always`ing the previous backup and then rsync'ing over it. The latest backup has about
3 million files, so each backup folder hovers around that number, but there are fewer than 100
backups in total.
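
Concretely, each backup is made roughly like this (hostnames, paths, and rsync flags here are
placeholders, not the exact invocation):

    # Clone the previous backup via reflinks: data extents are shared,
    # but every file still gets its own inode and extent references.
    $ cp -a --reflink=always /backups/2023-03-01 /backups/2023-03-02
    # Then update the clone in place from the live machine.
    $ rsync -a --delete workstation:/home/ /backups/2023-03-02/home/

Which is exactly why the metadata balloons: the shared data costs nothing per copy, but each of
the ~100 backups carries a full set of metadata for its ~3 million files.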

This has led to a lot of metadata:
Metadata,RAID1C3: Size:1.48TiB, Used:1.46TiB (98.73%)

Sufficiently slow that, when I tried to convert data from raid1 to raid1c3, I gave up about six
months in, once it was clear the finish date was still years out:
Data,RAID1: Size:21.13TiB, Used:21.07TiB (99.71%)
Data,RAID1C3: Size:5.94TiB, Used:5.46TiB (91.86%)
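
Those usage lines are `btrfs filesystem df`-style output; the data conversion itself would have
been something along these lines, with /mnt/pool again a placeholder and progress visible via
`btrfs balance status`:

    # Convert data chunks to raid1c3; the 'soft' modifier skips chunks
    # that are already raid1c3, so an interrupted conversion can resume.
    $ btrfs balance start -dconvert=raid1c3,soft /mnt/pool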

I recently started adding some I/O to the machine: a MB/s or two of writes from OpenStack Swift,
which has now racked up a few million files of its own (in a directory tree two layers of
~1000-folder directories deep). This has made the filesystem largely unusable.

The usual every-30-second commit now takes upwards of ten minutes and locks the entire
filesystem for much of that time. The actual write bandwidth is trivially manageable, and if I
set the commit interval to something absurd like an hour, the filesystem is very usable.
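
For reference, that's just the `commit=` mount option, e.g. (placeholder mount point):

    # Raise the periodic transaction commit from the default 30s to an
    # hour; trades away up to an hour of unsynced writes on a crash.
    $ mount -o remount,commit=3600 /mnt/pool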

I assume there's not much to be done here, and the volume needs to move off of BTRFS onto
something that can better handle lots of files? The metadata-device-preference patches don't
seem to be making any progress (though, from what I understand, they would very likely solve
this issue trivially?).


Thanks,
Matt

