* [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation
@ 2021-03-18 16:17 Brian Foster
  2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster
  ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Brian Foster @ 2021-03-18 16:17 UTC (permalink / raw)
To: linux-xfs

Hi all,

This is v3 of the allocbt block set aside fixup. The primary change in
v3 is to filter out rmapbt blocks from the usage accounting. rmapbt
blocks live in free space similarly to allocbt blocks, but are managed
appropriately via perag reservation and so should not be set aside from
reservation requests.

Brian

v3:
- Use a mount flag for easy detection of active perag reservation.
- Filter rmapbt blocks from allocbt block accounting.
v2: https://lore.kernel.org/linux-xfs/20210222152108.896178-1-bfoster@redhat.com/
- Use an atomic counter instead of a percpu counter.
v1: https://lore.kernel.org/linux-xfs/20210217132339.651020-1-bfoster@redhat.com/

Brian Foster (2):
  xfs: set a mount flag when perag reservation is active
  xfs: set aside allocation btree blocks from block reservation

 fs/xfs/libxfs/xfs_ag_resv.c     | 24 ++++++++++++++----------
 fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
 fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
 fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
 fs/xfs/xfs_mount.h              |  7 +++++++
 5 files changed, 52 insertions(+), 11 deletions(-)

-- 
2.26.2

^ permalink raw reply	[flat|nested] 23+ messages in thread
* [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-18 16:17 [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
@ 2021-03-18 16:17 ` Brian Foster
  2021-03-18 20:55   ` Dave Chinner
  2021-03-18 16:17 ` [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
  2021-04-09 14:17 ` [PATCH v3 0/2] " Brian Foster
  2 siblings, 1 reply; 23+ messages in thread
From: Brian Foster @ 2021-03-18 16:17 UTC (permalink / raw)
To: linux-xfs

perag reservation is enabled at mount time on a per AG basis. The
upcoming in-core allocation btree accounting mechanism needs to know
when reservation is enabled and that all perag AGF contexts are
initialized. As a preparation step, set a flag in the mount structure
and unconditionally initialize the pagf on all mounts where at least
one reservation is active.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_ag_resv.c | 24 ++++++++++++++----------
 fs/xfs/xfs_mount.h          |  1 +
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c
index fdfe6dc0d307..8e454097b905 100644
--- a/fs/xfs/libxfs/xfs_ag_resv.c
+++ b/fs/xfs/libxfs/xfs_ag_resv.c
@@ -250,6 +250,7 @@ xfs_ag_resv_init(
 	xfs_extlen_t			ask;
 	xfs_extlen_t			used;
 	int				error = 0;
+	bool				has_resv = false;
 
 	/* Create the metadata reservation. */
 	if (pag->pag_meta_resv.ar_asked == 0) {
@@ -287,6 +288,8 @@ xfs_ag_resv_init(
 			if (error)
 				goto out;
 		}
+		if (ask)
+			has_resv = true;
 	}
 
 	/* Create the RMAPBT metadata reservation */
@@ -300,18 +303,19 @@ xfs_ag_resv_init(
 		error = __xfs_ag_resv_init(pag, XFS_AG_RESV_RMAPBT, ask, used);
 		if (error)
 			goto out;
+		if (ask)
+			has_resv = true;
 	}
 
-#ifdef DEBUG
-	/* need to read in the AGF for the ASSERT below to work */
-	error = xfs_alloc_pagf_init(pag->pag_mount, tp, pag->pag_agno, 0);
-	if (error)
-		return error;
-
-	ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
-	       xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <=
-	       pag->pagf_freeblks + pag->pagf_flcount);
-#endif
+	if (has_resv) {
+		mp->m_has_agresv = true;
+		error = xfs_alloc_pagf_init(mp, tp, pag->pag_agno, 0);
+		if (error)
+			return error;
+		ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
+		       xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <=
+		       pag->pagf_freeblks + pag->pagf_flcount);
+	}
 out:
 	return error;
 }
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index 659ad95fe3e0..489d9b2c53d9 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -139,6 +139,7 @@ typedef struct xfs_mount {
 	bool			m_fail_unmount;
 	bool			m_finobt_nores; /* no per-AG finobt resv. */
 	bool			m_update_sb;	/* sb needs update in mount */
+	bool			m_has_agresv;	/* perag reservations active */
 
 	/*
 	 * Bitsets of per-fs metadata that have been checked and/or are sick.

-- 
2.26.2

^ permalink raw reply related	[flat|nested] 23+ messages in thread
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster
@ 2021-03-18 20:55   ` Dave Chinner
  2021-03-18 22:19     ` Darrick J. Wong
  0 siblings, 1 reply; 23+ messages in thread
From: Dave Chinner @ 2021-03-18 20:55 UTC (permalink / raw)
To: Brian Foster; +Cc: linux-xfs

On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> perag reservation is enabled at mount time on a per AG basis. The
> upcoming in-core allocation btree accounting mechanism needs to know
> when reservation is enabled and that all perag AGF contexts are
> initialized. As a preparation step, set a flag in the mount
> structure and unconditionally initialize the pagf on all mounts
> where at least one reservation is active.

I'm not sure this is a good idea. AFAICT, this means just about any
filesystem with finobt, reflink and/or rmap will now typically read
every AGF header in the filesystem at mount time. That means pretty
much every v5 filesystem in production...

We've always tried to avoid reading all AG headers at mount time
because that does not scale when we have really large filesystems (I'm
talking petabytes here). We should only read AG headers if there is
something not fully recovered during the mount (i.e. slow path) and
not on every mount.

Needing to do a few thousand synchronous read IOs during mount makes
mount very slow, and as such we always try to do dynamic instantiation
of AG headers... Testing I've done with exabyte scale filesystems
(>10^6 AGs) shows that it can take minutes for mount to run when each
AG header needs to be read, and that's on SSDs where the individual
read latency is only a couple of hundred microseconds. On spinning
disks that can do 200 IOPS, we're potentially talking hours just to
mount really large filesystems...

Hence I don't think that any algorithm that requires reading every AGF
header in the filesystem at mount time on every v5 filesystem already
out there in production (because finobt triggers this) is a
particularly good idea...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 23+ messages in thread
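For a rough sense of the scale Dave describes, the serial AGF-read cost is simple arithmetic. The sketch below uses the figures quoted in this thread (>10^6 AGs, ~200 IOPS on spinning disks); `serial_agf_read_seconds` is an illustrative helper, not an XFS function.

```c
#include <assert.h>

/*
 * Back-of-the-envelope cost of performing one synchronous AGF read per
 * AG at mount time, fully serialised. Illustrative only -- the figures
 * come from the numbers quoted in this thread, not from measurement.
 */
static long
serial_agf_read_seconds(long ag_count, long iops)
{
	return ag_count / iops;
}
```

At a million AGs and 200 IOPS that is 5000 seconds, i.e. roughly an hour and a half of nothing but AGF header reads before mount can complete, which is the behaviour Dave is objecting to.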
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-18 20:55 ` Dave Chinner
@ 2021-03-18 22:19   ` Darrick J. Wong
  2021-03-19  1:05     ` Dave Chinner
  0 siblings, 1 reply; 23+ messages in thread
From: Darrick J. Wong @ 2021-03-18 22:19 UTC (permalink / raw)
To: Dave Chinner; +Cc: Brian Foster, linux-xfs

On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > perag reservation is enabled at mount time on a per AG basis. The
> > upcoming in-core allocation btree accounting mechanism needs to know
> > when reservation is enabled and that all perag AGF contexts are
> > initialized. As a preparation step, set a flag in the mount
> > structure and unconditionally initialize the pagf on all mounts
> > where at least one reservation is active.
>
> I'm not sure this is a good idea. AFAICT, this means just about any
> filesystem with finobt, reflink and/or rmap will now typically read
> every AGF header in the filesystem at mount time. That means pretty
> much every v5 filesystem in production...

They already do that, because the AG headers are where we store the
btree block counts.

> We've always tried to avoid reading all AG headers at mount time
> because that does not scale when we have really large filesystems
> (I'm talking petabytes here). We should only read AG headers if there
> is something not fully recovered during the mount (i.e. slow path)
> and not on every mount.
>
> Needing to do a few thousand synchronous read IOs during mount makes
> mount very slow, and as such we always try to do dynamic
> instantiation of AG headers... Testing I've done with exabyte scale
> filesystems (>10^6 AGs) shows that it can take minutes for mount to
> run when each AG header needs to be read, and that's on SSDs where
> the individual read latency is only a couple of hundred
> microseconds. On spinning disks that can do 200 IOPS, we're
> potentially talking hours just to mount really large filesystems...

Is that with reflink enabled? Reflink always scans the right edge of
the refcount btree at mount to clean out stale COW staging extents,
and (prior to the introduction of the inode btree counts feature last
year) we also had to walk the entire finobt to find out how big it is.

TBH I think the COW recovery and the AG block reservation pieces are
prime candidates for throwing at an xfs_pwork workqueue so we can
perform those scans in parallel.

> Hence I don't think that any algorithm that requires reading every
> AGF header in the filesystem at mount time on every v5 filesystem
> already out there in production (because finobt triggers this) is a
> particularly good idea...

Perhaps not, but the horse bolted 5 years ago. :/

--D

> Cheers,
>
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

^ permalink raw reply	[flat|nested] 23+ messages in thread
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-18 22:19 ` Darrick J. Wong
@ 2021-03-19  1:05   ` Dave Chinner
  2021-03-19  1:34     ` Darrick J. Wong
  2021-03-19  1:43     ` Dave Chinner
  0 siblings, 2 replies; 23+ messages in thread
From: Dave Chinner @ 2021-03-19 1:05 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: Brian Foster, linux-xfs

On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > perag reservation is enabled at mount time on a per AG basis. The
> > > upcoming in-core allocation btree accounting mechanism needs to
> > > know when reservation is enabled and that all perag AGF contexts
> > > are initialized. As a preparation step, set a flag in the mount
> > > structure and unconditionally initialize the pagf on all mounts
> > > where at least one reservation is active.
> >
> > I'm not sure this is a good idea. AFAICT, this means just about any
> > filesystem with finobt, reflink and/or rmap will now typically read
> > every AGF header in the filesystem at mount time. That means pretty
> > much every v5 filesystem in production...
>
> They already do that, because the AG headers are where we store the
> btree block counts.

Oh, we're brute forcing AG reservation space? I thought we were doing
something smarter than that, because I'm sure this isn't the first time
I've mentioned this problem....

> > We've always tried to avoid reading all AG headers at mount time
> > because that does not scale when we have really large filesystems
> > (I'm talking petabytes here). We should only read AG headers if
> > there is something not fully recovered during the mount (i.e. slow
> > path) and not on every mount.
> >
> > Needing to do a few thousand synchronous read IOs during mount makes
> > mount very slow, and as such we always try to do dynamic
> > instantiation of AG headers... Testing I've done with exabyte scale
> > filesystems (>10^6 AGs) shows that it can take minutes for mount to
> > run when each AG header needs to be read, and that's on SSDs where
> > the individual read latency is only a couple of hundred
> > microseconds. On spinning disks that can do 200 IOPS, we're
> > potentially talking hours just to mount really large filesystems...
>
> Is that with reflink enabled? Reflink always scans the right edge of
> the refcount btree at mount to clean out stale COW staging extents,

Aren't they cleaned up at unmount when the inode is inactivated? i.e.
isn't this something that should only be done on an unclean mount?

> and (prior to the introduction of the inode btree counts feature last
> year) we also had to walk the entire finobt to find out how big it
> is.

ugh, I forgot about the fact we had to add that wart because we screwed
up the space reservations for finobt operations...

As for large scale testing, I suspect I turned everything optional off
when I last did this testing, because mkfs currently requires a lot of
per-AG IO to initialise structures. On an SSD, mkfs.xfs -K -f -d
agcount=10000 ... takes

		mkfs time	mount time
-m crc=0	15s		1s
-m rmapbt=1	25s		6s

Multiply those times by another 1000 to get to an 8EB filesystem and
the difference is several hours of mkfs time and a couple of hours of
mount time....

So from the numbers, it is pretty likely I didn't test anything that
actually required iterating 8 million AGs at mount time....

> TBH I think the COW recovery and the AG block reservation pieces are
> prime candidates for throwing at an xfs_pwork workqueue so we can
> perform those scans in parallel.

As I mentioned on #xfs, I think we only need to do the AG read if we
are near enospc. i.e. we can take the entire reservation at mount time
(which is fixed per-ag) and only take away the used from the
reservation (i.e. return it to the free space pool) when we actually
access the AGF/AGI the first time. Or when we get an ENOSPC event,
which might occur when we try to take the fixed reservation at mount
time...

> > Hence I don't think that any algorithm that requires reading every
> > AGF header in the filesystem at mount time on every v5 filesystem
> > already out there in production (because finobt triggers this) is a
> > particularly good idea...
>
> Perhaps not, but the horse bolted 5 years ago. :/

Let's go catch it :P

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 23+ messages in thread
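The lazy scheme Dave outlines above can be modelled in a few lines: take the worst-case reservation for every AG up front with no AGF I/O, then hand the already-used portion back to the free space pool the first time each AGF is actually read. The following is a toy user-space model of that accounting only; the names (`mock_mount_resv`, `mock_pagf_init`) and numbers are invented for illustration and are not kernel code.

```c
#include <assert.h>
#include <stdbool.h>

#define AG_COUNT	4
#define WORST_CASE_RESV	100	/* assumed worst-case metadata blocks per AG */

struct mock_ag {
	bool	pagf_init;	/* has the AGF header been read yet? */
	long	used;		/* blocks the metadata btrees really consume */
};

struct mock_mount {
	long		fdblocks;	/* free space visible to userspace */
	struct mock_ag	ags[AG_COUNT];
};

/* Mount: reserve the theoretical maximum for every AG, no AGF I/O needed. */
static void mock_mount_resv(struct mock_mount *mp)
{
	mp->fdblocks -= (long)AG_COUNT * WORST_CASE_RESV;
}

/*
 * First AGF access: now that the real usage is known, return the excess
 * reservation (worst case minus actually-used) to the free space pool.
 */
static void mock_pagf_init(struct mock_mount *mp, int agno, long used)
{
	struct mock_ag *ag = &mp->ags[agno];

	if (ag->pagf_init)
		return;		/* already trimmed for this AG */
	ag->pagf_init = true;
	ag->used = used;
	mp->fdblocks += WORST_CASE_RESV - used;
}
```

The point of the model is that free space is only ever over-reserved, never over-committed, so no AGF read is required at mount unless the optimistic reservation itself fails for lack of space.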
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:05 ` Dave Chinner
@ 2021-03-19  1:34   ` Darrick J. Wong
  2021-03-19 14:54     ` Brian Foster
  2021-03-19  1:43   ` Dave Chinner
  1 sibling, 1 reply; 23+ messages in thread
From: Darrick J. Wong @ 2021-03-19 1:34 UTC (permalink / raw)
To: Dave Chinner; +Cc: Brian Foster, linux-xfs

On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > > perag reservation is enabled at mount time on a per AG basis.
> > > > The upcoming in-core allocation btree accounting mechanism needs
> > > > to know when reservation is enabled and that all perag AGF
> > > > contexts are initialized. As a preparation step, set a flag in
> > > > the mount structure and unconditionally initialize the pagf on
> > > > all mounts where at least one reservation is active.
> > >
> > > I'm not sure this is a good idea. AFAICT, this means just about
> > > any filesystem with finobt, reflink and/or rmap will now typically
> > > read every AGF header in the filesystem at mount time. That means
> > > pretty much every v5 filesystem in production...
> >
> > They already do that, because the AG headers are where we store the
> > btree block counts.
>
> Oh, we're brute forcing AG reservation space? I thought we were
> doing something smarter than that, because I'm sure this isn't the
> first time I've mentioned this problem....

Probably not... :)

> > > We've always tried to avoid reading all AG headers at mount time
> > > because that does not scale when we have really large filesystems
> > > (I'm talking petabytes here). We should only read AG headers if
> > > there is something not fully recovered during the mount (i.e.
> > > slow path) and not on every mount.
> > >
> > > Needing to do a few thousand synchronous read IOs during mount
> > > makes mount very slow, and as such we always try to do dynamic
> > > instantiation of AG headers... Testing I've done with exabyte
> > > scale filesystems (>10^6 AGs) shows that it can take minutes for
> > > mount to run when each AG header needs to be read, and that's on
> > > SSDs where the individual read latency is only a couple of hundred
> > > microseconds. On spinning disks that can do 200 IOPS, we're
> > > potentially talking hours just to mount really large
> > > filesystems...
> >
> > Is that with reflink enabled? Reflink always scans the right edge of
> > the refcount btree at mount to clean out stale COW staging extents,
>
> Aren't they cleaned up at unmount when the inode is inactivated?

Yes. Or when the blockgc timeout expires, or when ENOSPC pushes
blockgc...

> i.e. isn't this something that should only be done on an unclean
> mount?

Years ago (back when reflink was experimental) we left it that way so
that if there were any serious implementation bugs we wouldn't leak
blocks everywhere. I think we forgot to take it out.

> > and (prior to the introduction of the inode btree counts feature
> > last year) we also had to walk the entire finobt to find out how
> > big it is.
>
> ugh, I forgot about the fact we had to add that wart because we
> screwed up the space reservations for finobt operations...

Yeah.

> As for large scale testing, I suspect I turned everything optional
> off when I last did this testing, because mkfs currently requires a
> lot of per-AG IO to initialise structures. On an SSD, mkfs.xfs
> -K -f -d agcount=10000 ... takes
>
> 		mkfs time	mount time
> -m crc=0	15s		1s
> -m rmapbt=1	25s		6s
>
> Multiply those times by another 1000 to get to an 8EB filesystem and
> the difference is several hours of mkfs time and a couple of hours of
> mount time....
>
> So from the numbers, it is pretty likely I didn't test anything that
> actually required iterating 8 million AGs at mount time....
>
> > TBH I think the COW recovery and the AG block reservation pieces are
> > prime candidates for throwing at an xfs_pwork workqueue so we can
> > perform those scans in parallel.

[This didn't turn out to be difficult at all.]

> As I mentioned on #xfs, I think we only need to do the AG read if we
> are near enospc. i.e. we can take the entire reservation at mount
> time (which is fixed per-ag) and only take away the used from the
> reservation (i.e. return it to the free space pool) when we actually
> access the AGF/AGI the first time. Or when we get an ENOSPC event,
> which might occur when we try to take the fixed reservation at mount
> time...

<nod> That's probably not hard. Compute the theoretical maximum size of
the finobt/rmapbt/refcountbt, multiply that by the number of AGs, try
to reserve that much, and if we get it, we can trivially initialise the
per-AG reservation structure. If that fails, we fall back to the
scanning thing we do now:

When we set pag[if]_init in the per-AG structure, we can back off the
space reservation by the number of blocks in the trees tracked by that
AG header, which will add that quantity to fdblocks. We can handle the
ENOSPC case by modifying the per-AG blockgc worker to load the AGF/AGI
if they aren't already.

> > > Hence I don't think that any algorithm that requires reading every
> > > AGF header in the filesystem at mount time on every v5 filesystem
> > > already out there in production (because finobt triggers this) is
> > > a particularly good idea...
> >
> > Perhaps not, but the horse bolted 5 years ago. :/
>
> Let's go catch it :P

FWIW I previously fixed the rmapbt/reflink transaction reservations
being unnecessarily large, so (provided deferred inode inactivation
gets reviewed this cycle) I can try to put all these reflink cleanups
together for the next cycle.

--D

> Cheers,
>
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

^ permalink raw reply	[flat|nested] 23+ messages in thread
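The optimistic path Darrick describes needs a worst-case tree size to reserve against. The following is a sketch of that estimate, loosely modelled on the kernel's xfs_btree_calc_size() logic: sum the blocks needed at each btree level when every block holds only the minimum record count. `btree_worst_case_blocks` and the fanout values used in the test are illustrative, not real XFS geometry.

```c
#include <assert.h>

/*
 * Worst-case block count for a btree indexing nrecs records when every
 * block holds only minrecs entries (minimum fill). Summing level by
 * level gives the most blocks the tree can ever occupy, which is what
 * an optimistic mount-time reservation would have to cover.
 * Illustrative sketch only -- not the kernel's xfs_btree_calc_size().
 */
static unsigned long long
btree_worst_case_blocks(unsigned long long nrecs, unsigned int minrecs)
{
	unsigned long long blocks = 0;

	do {
		/* blocks needed at this level to point at the level below */
		nrecs = (nrecs + minrecs - 1) / minrecs;
		blocks += nrecs;
	} while (nrecs > 1);
	return blocks;
}
```

A mount-time reservation along these lines would be roughly this figure for each of the finobt/rmapbt/refcountbt, summed and multiplied by sb_agcount, taken without reading a single AGF.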
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:34 ` Darrick J. Wong
@ 2021-03-19 14:54   ` Brian Foster
  0 siblings, 0 replies; 23+ messages in thread
From: Brian Foster @ 2021-03-19 14:54 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: Dave Chinner, linux-xfs

On Thu, Mar 18, 2021 at 06:34:30PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > > > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > > > perag reservation is enabled at mount time on a per AG basis.
> > > > > The upcoming in-core allocation btree accounting mechanism
> > > > > needs to know when reservation is enabled and that all perag
> > > > > AGF contexts are initialized. As a preparation step, set a
> > > > > flag in the mount structure and unconditionally initialize
> > > > > the pagf on all mounts where at least one reservation is
> > > > > active.
> > > >
> > > > I'm not sure this is a good idea. AFAICT, this means just about
> > > > any filesystem with finobt, reflink and/or rmap will now
> > > > typically read every AGF header in the filesystem at mount
> > > > time. That means pretty much every v5 filesystem in
> > > > production...
> > >
> > > They already do that, because the AG headers are where we store
> > > the btree block counts.
> >
> > Oh, we're brute forcing AG reservation space? I thought we were
> > doing something smarter than that, because I'm sure this isn't the
> > first time I've mentioned this problem....
>
> Probably not... :)
>
> > > > We've always tried to avoid reading all AG headers at mount
> > > > time because that does not scale when we have really large
> > > > filesystems (I'm talking petabytes here). We should only read
> > > > AG headers if there is something not fully recovered during the
> > > > mount (i.e. slow path) and not on every mount.
> > > >
> > > > Needing to do a few thousand synchronous read IOs during mount
> > > > makes mount very slow, and as such we always try to do dynamic
> > > > instantiation of AG headers... Testing I've done with exabyte
> > > > scale filesystems (>10^6 AGs) shows that it can take minutes
> > > > for mount to run when each AG header needs to be read, and
> > > > that's on SSDs where the individual read latency is only a
> > > > couple of hundred microseconds. On spinning disks that can do
> > > > 200 IOPS, we're potentially talking hours just to mount really
> > > > large filesystems...
> > >
> > > Is that with reflink enabled? Reflink always scans the right edge
> > > of the refcount btree at mount to clean out stale COW staging
> > > extents,
> >
> > Aren't they cleaned up at unmount when the inode is inactivated?
>
> Yes. Or when the blockgc timeout expires, or when ENOSPC pushes
> blockgc...
>
> > i.e. isn't this something that should only be done on an unclean
> > mount?
>
> Years ago (back when reflink was experimental) we left it that way so
> that if there were any serious implementation bugs we wouldn't leak
> blocks everywhere. I think we forgot to take it out.
>
> > > and (prior to the introduction of the inode btree counts feature
> > > last year) we also had to walk the entire finobt to find out how
> > > big it is.
> >
> > ugh, I forgot about the fact we had to add that wart because we
> > screwed up the space reservations for finobt operations...
>
> Yeah.
>
> > As for large scale testing, I suspect I turned everything optional
> > off when I last did this testing, because mkfs currently requires a
> > lot of per-AG IO to initialise structures. On an SSD, mkfs.xfs
> > -K -f -d agcount=10000 ... takes
> >
> > 		mkfs time	mount time
> > -m crc=0	15s		1s
> > -m rmapbt=1	25s		6s
> >
> > Multiply those times by another 1000 to get to an 8EB filesystem
> > and the difference is several hours of mkfs time and a couple of
> > hours of mount time....
> >
> > So from the numbers, it is pretty likely I didn't test anything
> > that actually required iterating 8 million AGs at mount time....
> >
> > > TBH I think the COW recovery and the AG block reservation pieces
> > > are prime candidates for throwing at an xfs_pwork workqueue so we
> > > can perform those scans in parallel.
>
> [This didn't turn out to be difficult at all.]
>
> > As I mentioned on #xfs, I think we only need to do the AG read if
> > we are near enospc. i.e. we can take the entire reservation at
> > mount time (which is fixed per-ag) and only take away the used from
> > the reservation (i.e. return it to the free space pool) when we
> > actually access the AGF/AGI the first time. Or when we get an
> > ENOSPC event, which might occur when we try to take the fixed
> > reservation at mount time...
>
> <nod> That's probably not hard. Compute the theoretical maximum size
> of the finobt/rmapbt/refcountbt, multiply that by the number of AGs,
> try to reserve that much, and if we get it, we can trivially
> initialise the per-AG reservation structure. If that fails, we fall
> back to the scanning thing we do now:
>

Even in the failure case, we might be able to limit the mount time
scanning that takes place by just scanning until we've found enough AGs
with consumed reservation such that the mount time estimated
reservation succeeds. Of course, the worst case would always be a full
scan (either due to -ENOSPC on the res or very shortly after mount
since the res might have left the fs near -ENOSPC) so it might not be
worth it unless there's value and the logic is simple.

Brian

> When we set pag[if]_init in the per-AG structure, we can back off the
> space reservation by the number of blocks in the trees tracked by
> that AG header, which will add that quantity to fdblocks. We can
> handle the ENOSPC case by modifying the per-AG blockgc worker to load
> the AGF/AGI if they aren't already.
>
> > > > Hence I don't think that any algorithm that requires reading
> > > > every AGF header in the filesystem at mount time on every v5
> > > > filesystem already out there in production (because finobt
> > > > triggers this) is a particularly good idea...
> > >
> > > Perhaps not, but the horse bolted 5 years ago. :/
> >
> > Let's go catch it :P
>
> FWIW I previously fixed the rmapbt/reflink transaction reservations
> being unnecessarily large, so (provided deferred inode inactivation
> gets reviewed this cycle) I can try to put all these reflink cleanups
> together for the next cycle.
>
> --D
>
> > Cheers,
> >
> > Dave.
> > -- 
> > Dave Chinner
> > david@fromorbit.com
>

^ permalink raw reply	[flat|nested] 23+ messages in thread
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:05 ` Dave Chinner
  2021-03-19  1:34 ` Darrick J. Wong
@ 2021-03-19  1:43 ` Dave Chinner
  2021-03-19  1:48   ` Darrick J. Wong
  2021-03-19 14:54   ` Brian Foster
  1 sibling, 2 replies; 23+ messages in thread
From: Dave Chinner @ 2021-03-19 1:43 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: Brian Foster, linux-xfs

On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > TBH I think the COW recovery and the AG block reservation pieces are
> > prime candidates for throwing at an xfs_pwork workqueue so we can
> > perform those scans in parallel.
>
> As I mentioned on #xfs, I think we only need to do the AG read if we
> are near enospc. i.e. we can take the entire reservation at mount
> time (which is fixed per-ag) and only take away the used from the
> reservation (i.e. return it to the free space pool) when we actually
> access the AGF/AGI the first time. Or when we get an ENOSPC event,
> which might occur when we try to take the fixed reservation at mount
> time...

Which leaves the question about when we need to actually do the
accounting needed to fix the bug Brian is trying to fix. Can that be
delayed until we read the AGFs or have an ENOSPC event occur? Or maybe
some other "we are near ENOSPC and haven't read all AGFs yet"
threshold/trigger?

If that's the case, then I'm happy to have this patchset proceed as it
stands under the understanding that there will be follow up to make
the clean, lots of space free mount case avoid reading the AG headers.

If it can't be made constrained, then I think we probably need to come
up with a different approach that doesn't require reading every AG
header on every mount...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 23+ messages in thread
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:43 ` Dave Chinner
@ 2021-03-19  1:48   ` Darrick J. Wong
  2021-03-19  2:08     ` Dave Chinner
  0 siblings, 1 reply; 23+ messages in thread
From: Darrick J. Wong @ 2021-03-19 1:48 UTC (permalink / raw)
To: Dave Chinner; +Cc: Brian Foster, linux-xfs

On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > TBH I think the COW recovery and the AG block reservation pieces
> > > are prime candidates for throwing at an xfs_pwork workqueue so we
> > > can perform those scans in parallel.
> >
> > As I mentioned on #xfs, I think we only need to do the AG read if
> > we are near enospc. i.e. we can take the entire reservation at
> > mount time (which is fixed per-ag) and only take away the used from
> > the reservation (i.e. return it to the free space pool) when we
> > actually access the AGF/AGI the first time. Or when we get an
> > ENOSPC event, which might occur when we try to take the fixed
> > reservation at mount time...
>
> Which leaves the question about when we need to actually do the
> accounting needed to fix the bug Brian is trying to fix. Can that be
> delayed until we read the AGFs or have an ENOSPC event occur? Or
> maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> threshold/trigger?

Or just load them in the background and let mount() return to
userspace?

> If that's the case, then I'm happy to have this patchset proceed as
> it stands under the understanding that there will be follow up to
> make the clean, lots of space free mount case avoid reading the AG
> headers.
>
> If it can't be made constrained, then I think we probably need to
> come up with a different approach that doesn't require reading every
> AG header on every mount...
>
> Cheers,
>
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

^ permalink raw reply	[flat|nested] 23+ messages in thread
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:48 ` Darrick J. Wong
@ 2021-03-19  2:08   ` Dave Chinner
  0 siblings, 0 replies; 23+ messages in thread
From: Dave Chinner @ 2021-03-19 2:08 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: Brian Foster, linux-xfs

On Thu, Mar 18, 2021 at 06:48:21PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > > TBH I think the COW recovery and the AG block reservation
> > > > pieces are prime candidates for throwing at an xfs_pwork
> > > > workqueue so we can perform those scans in parallel.
> > >
> > > As I mentioned on #xfs, I think we only need to do the AG read if
> > > we are near enospc. i.e. we can take the entire reservation at
> > > mount time (which is fixed per-ag) and only take away the used
> > > from the reservation (i.e. return to the free space pool) when we
> > > actually access the AGF/AGI the first time. Or when we get a
> > > ENOSPC event, which might occur when we try to take the fixed
> > > reservation at mount time...
> >
> > Which leaves the question about when we need to actually do the
> > accounting needed to fix the bug Brian is trying to fix. Can that
> > be delayed until we read the AGFs or have an ENOSPC event occur? Or
> > maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> > threshold/trigger?
>
> Or just load them in the background and let mount() return to
> userspace?

Perhaps, but that tends to have impacts on things that run immediately
after mount. e.g. it will screw with benchmarks in unpredictable ways
and I'm not going to like that at all. :(

i.e. I like the deterministic, repeatable behaviour we have right now
because it makes back-to-back performance testing easy to reason about
why performance/behaviour changed...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 23+ messages in thread
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active 2021-03-19 1:43 ` Dave Chinner 2021-03-19 1:48 ` Darrick J. Wong @ 2021-03-19 14:54 ` Brian Foster 2021-03-23 22:40 ` Dave Chinner 1 sibling, 1 reply; 23+ messages in thread From: Brian Foster @ 2021-03-19 14:54 UTC (permalink / raw) To: Dave Chinner; +Cc: Darrick J. Wong, linux-xfs On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote: > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote: > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote: > > > TBH I think the COW recovery and the AG block reservation pieces are > > > prime candidates for throwing at an xfs_pwork workqueue so we can > > > perform those scans in parallel. > > > > As I mentioned on #xfs, I think we only need to do the AG read if we > > are near enospc. i.e. we can take the entire reservation at mount > > time (which is fixed per-ag) and only take away the used from the > > reservation (i.e. return to the free space pool) when we actually > > access the AGF/AGI the first time. Or when we get a ENOSPC > > event, which might occur when we try to take the fixed reservation > > at mount time... > > Which leaves the question about when we need to actually do the > accounting needed to fix the bug Brian is trying to fix. Can that be > delayed until we read the AGFs or have an ENOSPC event occur? Or > maybe some other "we are near ENOSPC and haven't read all AGFs yet" > threshold/trigger? > Technically there isn't a hard requirement to read in any AGFs at mount time. The tradeoff is that it leaves a gap in effectiveness until at least the majority of allocbt blocks have been accounted for (via perag agf initialization). The in-core counter simply folds into the reservation set aside value, so it would just remain at 0 at reservation time and behave as if the mechanism didn't exist in the first place. 
The obvious risk is a user can mount the fs and immediately acquire reservation without having populated the counter from enough AGs to prevent the reservation overrun problem. For that reason, I didn't really consider the "lazy" init approach a suitable fix and hooked onto the (mostly) preexisting perag res behavior to initialize the appropriate structures at mount time. If that underlying mount time behavior changes, it's not totally clear to me how that impacts this patch. If the perag res change relies on an overestimated mount time reservation and a fallback to a hard scan on -ENOSPC, then I wonder whether the overestimated reservation might effectively subsume whatever the allocbt set aside might be for that AG. If so, and the perag init effectively transfers excess reservation back to free space at the same time allocbt blocks are accounted for (and set aside from subsequent reservations), perhaps that has a similar net effect as the current behavior (of initializing the allocbt count at mount time)..? One problem is that might be hard to reason about even with code in place, let alone right now when the targeted behavior is still vaporware. OTOH, I suppose that if we do know right now that the perag res scan will still fall back to mount time scans beyond some low free space threshold, perhaps it's just a matter of factoring allocbt set aside into the threshold somehow so that we know the counter will always be initialized before a user can over reserve blocks. As it is, I don't really have a strong opinion on whether we should try to make this fix now and preserve it, or otherwise table it and revisit once we know what the resulting perag res code will look like. Thoughts? Brian > If that's the case, then I'm happy to have this patchset proceed as > it stands under the understanding that there will be follow up to > make the clean, lots of space free mount case avoid reading the > AG headers. 
> > If it can't be made constrained, then I think we probably need to > come up with a different approach that doesn't require reading every > AG header on every mount... > > Cheers, > > Dave. > -- > Dave Chinner > david@fromorbit.com > ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active 2021-03-19 14:54 ` Brian Foster @ 2021-03-23 22:40 ` Dave Chinner 2021-03-24 14:24 ` Brian Foster 0 siblings, 1 reply; 23+ messages in thread From: Dave Chinner @ 2021-03-23 22:40 UTC (permalink / raw) To: Brian Foster; +Cc: Darrick J. Wong, linux-xfs On Fri, Mar 19, 2021 at 10:54:25AM -0400, Brian Foster wrote: > On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote: > > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote: > > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote: > > > > TBH I think the COW recovery and the AG block reservation pieces are > > > > prime candidates for throwing at an xfs_pwork workqueue so we can > > > > perform those scans in parallel. > > > > > > As I mentioned on #xfs, I think we only need to do the AG read if we > > > are near enospc. i.e. we can take the entire reservation at mount > > > time (which is fixed per-ag) and only take away the used from the > > > reservation (i.e. return to the free space pool) when we actually > > > access the AGF/AGI the first time. Or when we get a ENOSPC > > > event, which might occur when we try to take the fixed reservation > > > at mount time... > > > > Which leaves the question about when we need to actually do the > > accounting needed to fix the bug Brian is trying to fix. Can that be > > delayed until we read the AGFs or have an ENOSPC event occur? Or > > maybe some other "we are near ENOSPC and haven't read all AGFs yet" > > threshold/trigger? > > > > Technically there isn't a hard requirement to read in any AGFs at mount > time. The tradeoff is that leaves a gap in effectiveness until at least > the majority of allocbt blocks have been accounted for (via perag agf > initialization). The in-core counter simply folds into the reservation > set aside value, so it would just remain at 0 at reservation time and > behave as if the mechanism didn't exist in the first place. 
The obvious > risk is a user can mount the fs and immediately acquire reservation > without having populated the counter from enough AGs to prevent the > reservation overrun problem. For that reason, I didn't really consider > the "lazy" init approach a suitable fix and hooked onto the (mostly) > preexisting perag res behavior to initialize the appropriate structures > at mount time. > > If that underlying mount time behavior changes, it's not totally clear > to me how that impacts this patch. If the perag res change relies on an > overestimated mount time reservation and a fallback to a hard scan on > -ENOSPC, then I wonder whether the overestimated reservation might > effectively subsume whatever the allocbt set aside might be for that AG. > If so, and the perag init effectively transfers excess reservation back > to free space at the same time allocbt blocks are accounted for (and set > aside from subsequent reservations), perhaps that has a similar net > effect as the current behavior (of initializing the allocbt count at > mount time)..? > > One problem is that might be hard to reason about even with code in > place, let alone right now when the targeted behavior is still > vaporware. OTOH, I suppose that if we do know right now that the perag > res scan will still fall back to mount time scans beyond some low free > space threshold, perhaps it's just a matter of factoring allocbt set > aside into the threshold somehow so that we know the counter will always > be initialized before a user can over reserve blocks. Yeah, that seems reasonable to me. I don't think it's difficult to handle - just set the setaside to maximum at mount time, then as we read in AGFs we replace the maximum setaside for that AG with the actual btree block usage. If we hit ENOSPC, then we can read in the uninitialised pags to reduce the setaside from the maximum to the actual values and return that free space back to the global pool... 
> As it is, I don't > really have a strong opinion on whether we should try to make this fix > now and preserve it, or otherwise table it and revisit once we know what > the resulting perag res code will look like. Thoughts? It sounds like we have a solid plan to address the AG header access at mount time, adding this code now doesn't make anything worse, nor does it appear to prevent us from fixing the AG header access problem in the future. So I'm happy for this fix to go ahead as it stands. Cheers, Dave. -- Dave Chinner david@fromorbit.com ^ permalink raw reply [flat|nested] 23+ messages in thread
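The lazy scheme Dave outlines above (assume a worst-case set-aside per AG at mount, replace it with the real allocbt usage as each AGF is read, and drain the remaining estimates on ENOSPC) can be sketched as a standalone model. This is purely illustrative: every name here (`toy_mount`, `toy_read_agf`, etc.) is invented for the sketch and none of it is XFS code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NR_AGS 4	/* tiny filesystem for the sketch */

struct toy_ag {
	bool		agf_read;	/* has this AG's AGF been read? */
	uint64_t	actual_blks;	/* real allocbt usage from the AGF */
};

struct toy_mount {
	struct toy_ag	ags[NR_AGS];
	uint64_t	max_per_ag;	/* conservative worst-case set-aside */
	uint64_t	set_aside;	/* current global set-aside */
};

/* Mount time: assume the worst for every AG until its AGF is read. */
static void toy_mount_init(struct toy_mount *mp, uint64_t max_per_ag)
{
	mp->max_per_ag = max_per_ag;
	mp->set_aside = (uint64_t)NR_AGS * max_per_ag;
	for (int i = 0; i < NR_AGS; i++)
		mp->ags[i].agf_read = false;
}

/* First AGF read: swap the worst-case estimate for actual usage. */
static void toy_read_agf(struct toy_mount *mp, int agno, uint64_t actual)
{
	struct toy_ag *ag = &mp->ags[agno];

	if (ag->agf_read)
		return;
	ag->agf_read = true;
	ag->actual_blks = actual;
	mp->set_aside -= mp->max_per_ag;	/* drop the estimate... */
	mp->set_aside += actual;		/* ...charge reality instead */
}

/* ENOSPC fallback: read every AGF not yet touched, freeing the excess. */
static void toy_enospc_drain(struct toy_mount *mp, const uint64_t *actuals)
{
	for (int i = 0; i < NR_AGS; i++)
		toy_read_agf(mp, i, actuals[i]);
}
```

The point of the model is the invariant: the set-aside only ever shrinks toward the true allocbt usage, so the global free space counter is never overcommitted while AGFs remain unread.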
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active 2021-03-23 22:40 ` Dave Chinner @ 2021-03-24 14:24 ` Brian Foster 0 siblings, 0 replies; 23+ messages in thread From: Brian Foster @ 2021-03-24 14:24 UTC (permalink / raw) To: Dave Chinner; +Cc: Darrick J. Wong, linux-xfs On Wed, Mar 24, 2021 at 09:40:36AM +1100, Dave Chinner wrote: > On Fri, Mar 19, 2021 at 10:54:25AM -0400, Brian Foster wrote: > > On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote: > > > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote: > > > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote: > > > > > TBH I think the COW recovery and the AG block reservation pieces are > > > > > prime candidates for throwing at an xfs_pwork workqueue so we can > > > > > perform those scans in parallel. > > > > > > > > As I mentioned on #xfs, I think we only need to do the AG read if we > > > > are near enospc. i.e. we can take the entire reservation at mount > > > > time (which is fixed per-ag) and only take away the used from the > > > > reservation (i.e. return to the free space pool) when we actually > > > > access the AGF/AGI the first time. Or when we get a ENOSPC > > > > event, which might occur when we try to take the fixed reservation > > > > at mount time... > > > > > > Which leaves the question about when we need to actually do the > > > accounting needed to fix the bug Brian is trying to fix. Can that be > > > delayed until we read the AGFs or have an ENOSPC event occur? Or > > > maybe some other "we are near ENOSPC and haven't read all AGFs yet" > > > threshold/trigger? > > > > > > > Technically there isn't a hard requirement to read in any AGFs at mount > > time. The tradeoff is that leaves a gap in effectiveness until at least > > the majority of allocbt blocks have been accounted for (via perag agf > > initialization). 
The in-core counter simply folds into the reservation > > set aside value, so it would just remain at 0 at reservation time and > > behave as if the mechanism didn't exist in the first place. The obvious > > risk is a user can mount the fs and immediately acquire reservation > > without having populated the counter from enough AGs to prevent the > > reservation overrun problem. For that reason, I didn't really consider > > the "lazy" init approach a suitable fix and hooked onto the (mostly) > > preexisting perag res behavior to initialize the appropriate structures > > at mount time. > > > > If that underlying mount time behavior changes, it's not totally clear > > to me how that impacts this patch. If the perag res change relies on an > > overestimated mount time reservation and a fallback to a hard scan on > > -ENOSPC, then I wonder whether the overestimated reservation might > > effectively subsume whatever the allocbt set aside might be for that AG. > > If so, and the perag init effectively transfers excess reservation back > > to free space at the same time allocbt blocks are accounted for (and set > > aside from subsequent reservations), perhaps that has a similar net > > effect as the current behavior (of initializing the allocbt count at > > mount time)..? > > > > One problem is that might be hard to reason about even with code in > > place, let alone right now when the targeted behavior is still > > vaporware. OTOH, I suppose that if we do know right now that the perag > > res scan will still fall back to mount time scans beyond some low free > > space threshold, perhaps it's just a matter of factoring allocbt set > > aside into the threshold somehow so that we know the counter will always > > be initialized before a user can over reserve blocks. > > Yeah, that seems reasonable to me. 
I don't think it's difficult to > handle - just set the setaside to maximum at mount time, then as we > read in AGFs we replace the maximum setaside for that AG with the > actual btree block usage. If we hit ENOSPC, then we can read in the > uninitialised pags to reduce the setaside from the maximum to the > actual values and return that free space back to the global pool... > Ack. That seems like a generic enough fallback plan if the overestimation of perag reservation doesn't otherwise cover the gap. > > As it is, I don't > > really have a strong opinion on whether we should try to make this fix > > now and preserve it, or otherwise table it and revisit once we know what > > the resulting perag res code will look like. Thoughts? > > It sounds like we have a solid plan to address the AG header access > at mount time, adding this code now doesn't make anything worse, > nor does it appear to prevent us from fixing the AG header access > problem in the future. So I'm happy for this fix to go ahead as it > stands. > Ok, so is that a Rv-b..? ;) So far after a quick skim back through the discussion, I don't have a reason for a v4 of this series... Brian > Cheers, > > Dave. > -- > Dave Chinner > david@fromorbit.com > ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation 2021-03-18 16:17 [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation Brian Foster 2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster @ 2021-03-18 16:17 ` Brian Foster 2021-03-18 20:31 ` Darrick J. Wong 2021-04-09 14:17 ` [PATCH v3 0/2] " Brian Foster 2 siblings, 1 reply; 23+ messages in thread From: Brian Foster @ 2021-03-18 16:17 UTC (permalink / raw) To: linux-xfs The blocks used for allocation btrees (bnobt and countbt) are technically considered free space. This is because as free space is used, allocbt blocks are removed and naturally become available for traditional allocation. However, this means that a significant portion of free space may consist of in-use btree blocks if free space is severely fragmented. On large filesystems with large perag reservations, this can lead to a rare but nasty condition where a significant amount of physical free space is available, but the majority of actual usable blocks consist of in-use allocbt blocks. We have a record of a (~12TB, 32 AG) filesystem with multiple AGs in a state with ~2.5GB or so free blocks tracked across ~300 total allocbt blocks, but effectively at 100% full because the free space is entirely consumed by refcountbt perag reservation. Such a large perag reservation is by design on large filesystems. The problem is that because the free space is so fragmented, this AG contributes the 300 or so allocbt blocks to the global counters as free space. If this pattern repeats across enough AGs, the filesystem lands in a state where global block reservation can outrun physical block availability. For example, a streaming buffered write on the affected filesystem continues to allow delayed allocation beyond the point where writeback starts to fail due to physical block allocation failures. 
The expected behavior is for the delalloc block reservation to fail gracefully with -ENOSPC before physical block allocation failure is a possibility. To address this problem, introduce an in-core counter to track the sum of all allocbt blocks in use by the filesystem. Use the new counter to set these blocks aside at reservation time and thus ensure they cannot be reserved until truly available. Since this is only necessary when perag reservations are active and the counter requires a read of each AGF to fully populate, only enforce on perag res enabled filesystems. This allows initialization of the counter at ->pagf_init time because the perag reservation init code reads each AGF at mount time. Signed-off-by: Brian Foster <bfoster@redhat.com> --- fs/xfs/libxfs/xfs_alloc.c | 12 ++++++++++++ fs/xfs/libxfs/xfs_alloc_btree.c | 2 ++ fs/xfs/xfs_mount.c | 18 +++++++++++++++++- fs/xfs/xfs_mount.h | 6 ++++++ 4 files changed, 37 insertions(+), 1 deletion(-) diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c index 0c623d3c1036..9fa378d2724e 100644 --- a/fs/xfs/libxfs/xfs_alloc.c +++ b/fs/xfs/libxfs/xfs_alloc.c @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf( struct xfs_agf *agf; /* ag freelist header */ struct xfs_perag *pag; /* per allocation group data */ int error; + uint32_t allocbt_blks; trace_xfs_alloc_read_agf(mp, agno); @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf( pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level); pag->pagf_init = 1; pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf); + + /* + * Update the global in-core allocbt block counter. Filter + * rmapbt blocks from the on-disk counter because those are + * managed by perag reservation. 
+ */ + if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) { + allocbt_blks = pag->pagf_btreeblks - + be32_to_cpu(agf->agf_rmap_blocks); + atomic64_add(allocbt_blks, &mp->m_allocbt_blks); + } } #ifdef DEBUG else if (!XFS_FORCED_SHUTDOWN(mp)) { diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c index 8e01231b308e..9f5a45f7baed 100644 --- a/fs/xfs/libxfs/xfs_alloc_btree.c +++ b/fs/xfs/libxfs/xfs_alloc_btree.c @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block( return 0; } + atomic64_inc(&cur->bc_mp->m_allocbt_blks); xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false); xfs_trans_agbtree_delta(cur->bc_tp, 1); @@ -95,6 +96,7 @@ xfs_allocbt_free_block( if (error) return error; + atomic64_dec(&cur->bc_mp->m_allocbt_blks); xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1, XFS_EXTENT_BUSY_SKIP_DISCARD); xfs_trans_agbtree_delta(cur->bc_tp, -1); diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index 1c97b155a8ee..29f97e909607 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -1176,6 +1176,7 @@ xfs_mod_fdblocks( int64_t lcounter; long long res_used; s32 batch; + uint64_t set_aside; if (delta > 0) { /* @@ -1215,8 +1216,23 @@ xfs_mod_fdblocks( else batch = XFS_FDBLOCKS_BATCH; + /* + * Set aside allocbt blocks because these blocks are tracked as free + * space but not available for allocation. Technically this means that a + * single reservation cannot consume all remaining free space, but the + * ratio of allocbt blocks to usable free blocks should be rather small. + * The tradeoff without this is that filesystems that maintain high + * perag block reservations can over reserve physical block availability + * and fail physical allocation, which leads to much more serious + * problems (i.e. transaction abort, pagecache discards, etc.) than + * slightly premature -ENOSPC. 
+ */ + set_aside = mp->m_alloc_set_aside; + if (mp->m_has_agresv) + set_aside += atomic64_read(&mp->m_allocbt_blks); + percpu_counter_add_batch(&mp->m_fdblocks, delta, batch); - if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside, + if (__percpu_counter_compare(&mp->m_fdblocks, set_aside, XFS_FDBLOCKS_BATCH) >= 0) { /* we had space! */ return 0; diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h index 489d9b2c53d9..041f437dc117 100644 --- a/fs/xfs/xfs_mount.h +++ b/fs/xfs/xfs_mount.h @@ -171,6 +171,12 @@ typedef struct xfs_mount { * extents or anything related to the rt device. */ struct percpu_counter m_delalloc_blks; + /* + * Global count of allocation btree blocks in use across all AGs. Only + * used when perag reservation is enabled. Helps prevent block + * reservation from attempting to reserve allocation btree blocks. + */ + atomic64_t m_allocbt_blks; struct radix_tree_root m_perag_tree; /* per-ag accounting info */ spinlock_t m_perag_lock; /* lock for m_perag_tree */ -- 2.26.2 ^ permalink raw reply related [flat|nested] 23+ messages in thread
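The reservation-time check the patch above adds to xfs_mod_fdblocks() can be modeled in isolation. The sketch below is not XFS code: the struct and helper names are invented, and the percpu/batching machinery is replaced with a plain signed counter so only the set-aside logic remains.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_counters {
	int64_t		fdblocks;	 /* global free block counter */
	uint64_t	alloc_set_aside; /* static mount-time set-aside */
	uint64_t	allocbt_blks;	 /* in-core allocbt block count */
	bool		has_agresv;	 /* any perag reservation active? */
};

/*
 * Try to reserve 'delta' blocks. Mirrors the patched logic: allocbt
 * blocks are only set aside when perag reservation is active, because
 * only then is the in-core counter guaranteed to be fully populated
 * (every AGF was read during perag res init at mount).
 */
static bool toy_reserve(struct toy_counters *c, int64_t delta)
{
	uint64_t set_aside = c->alloc_set_aside;

	if (c->has_agresv)
		set_aside += c->allocbt_blks;

	if (c->fdblocks - delta < (int64_t)set_aside)
		return false;	/* -ENOSPC before overcommitting */
	c->fdblocks -= delta;
	return true;
}
```

With 1000 free blocks, a static set-aside of 50 and 300 in-use allocbt blocks, a 700-block reservation is refused (it would dip into the 350 set-aside blocks) even though 1000 blocks are nominally "free", which is exactly the premature-ENOSPC tradeoff the patch's comment describes.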
* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation 2021-03-18 16:17 ` [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation Brian Foster @ 2021-03-18 20:31 ` Darrick J. Wong 2021-03-19 15:00 ` Brian Foster 0 siblings, 1 reply; 23+ messages in thread From: Darrick J. Wong @ 2021-03-18 20:31 UTC (permalink / raw) To: Brian Foster; +Cc: linux-xfs On Thu, Mar 18, 2021 at 12:17:07PM -0400, Brian Foster wrote: > The blocks used for allocation btrees (bnobt and countbt) are > technically considered free space. This is because as free space is > used, allocbt blocks are removed and naturally become available for > traditional allocation. However, this means that a significant > portion of free space may consist of in-use btree blocks if free > space is severely fragmented. > > On large filesystems with large perag reservations, this can lead to > a rare but nasty condition where a significant amount of physical > free space is available, but the majority of actual usable blocks > consist of in-use allocbt blocks. We have a record of a (~12TB, 32 > AG) filesystem with multiple AGs in a state with ~2.5GB or so free > blocks tracked across ~300 total allocbt blocks, but effectively at > 100% full because the the free space is entirely consumed by > refcountbt perag reservation. > > Such a large perag reservation is by design on large filesystems. > The problem is that because the free space is so fragmented, this AG > contributes the 300 or so allocbt blocks to the global counters as > free space. If this pattern repeats across enough AGs, the > filesystem lands in a state where global block reservation can > outrun physical block availability. For example, a streaming > buffered write on the affected filesystem continues to allow delayed > allocation beyond the point where writeback starts to fail due to > physical block allocation failures. 
The expected behavior is for the > delalloc block reservation to fail gracefully with -ENOSPC before > physical block allocation failure is a possibility. > > To address this problem, introduce an in-core counter to track the > sum of all allocbt blocks in use by the filesystem. Use the new > counter to set these blocks aside at reservation time and thus > ensure they cannot be reserved until truly available. Since this is > only necessary when perag reservations are active and the counter > requires a read of each AGF to fully populate, only enforce on perag > res enabled filesystems. This allows initialization of the counter > at ->pagf_init time because the perag reservation init code reads > each AGF at mount time. > > Signed-off-by: Brian Foster <bfoster@redhat.com> > --- > fs/xfs/libxfs/xfs_alloc.c | 12 ++++++++++++ > fs/xfs/libxfs/xfs_alloc_btree.c | 2 ++ > fs/xfs/xfs_mount.c | 18 +++++++++++++++++- > fs/xfs/xfs_mount.h | 6 ++++++ > 4 files changed, 37 insertions(+), 1 deletion(-) > > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c > index 0c623d3c1036..9fa378d2724e 100644 > --- a/fs/xfs/libxfs/xfs_alloc.c > +++ b/fs/xfs/libxfs/xfs_alloc.c > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf( > struct xfs_agf *agf; /* ag freelist header */ > struct xfs_perag *pag; /* per allocation group data */ > int error; > + uint32_t allocbt_blks; > > trace_xfs_alloc_read_agf(mp, agno); > > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf( > pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level); > pag->pagf_init = 1; > pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf); > + > + /* > + * Update the global in-core allocbt block counter. Filter > + * rmapbt blocks from the on-disk counter because those are > + * managed by perag reservation. 
> + */ > + if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) { > + allocbt_blks = pag->pagf_btreeblks - > + be32_to_cpu(agf->agf_rmap_blocks); > + atomic64_add(allocbt_blks, &mp->m_allocbt_blks); Does growfs call xfs_alloc_read_agf to set pagf_init in the perag structure when it adds AGs to the filesystem? I don't /think/ that's a problem for this use case (since allocbt_blks should be 0 on a freshly initialized AG) but i was a little surprised to see block reservation bits here. The other thing is that online repair (what little of it is in the kernel currently) can set pagf_init = 0; is that a problem? --D > + } > } > #ifdef DEBUG > else if (!XFS_FORCED_SHUTDOWN(mp)) { > diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c > index 8e01231b308e..9f5a45f7baed 100644 > --- a/fs/xfs/libxfs/xfs_alloc_btree.c > +++ b/fs/xfs/libxfs/xfs_alloc_btree.c > @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block( > return 0; > } > > + atomic64_inc(&cur->bc_mp->m_allocbt_blks); > xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false); > > xfs_trans_agbtree_delta(cur->bc_tp, 1); > @@ -95,6 +96,7 @@ xfs_allocbt_free_block( > if (error) > return error; > > + atomic64_dec(&cur->bc_mp->m_allocbt_blks); > xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1, > XFS_EXTENT_BUSY_SKIP_DISCARD); > xfs_trans_agbtree_delta(cur->bc_tp, -1); > diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c > index 1c97b155a8ee..29f97e909607 100644 > --- a/fs/xfs/xfs_mount.c > +++ b/fs/xfs/xfs_mount.c > @@ -1176,6 +1176,7 @@ xfs_mod_fdblocks( > int64_t lcounter; > long long res_used; > s32 batch; > + uint64_t set_aside; > > if (delta > 0) { > /* > @@ -1215,8 +1216,23 @@ xfs_mod_fdblocks( > else > batch = XFS_FDBLOCKS_BATCH; > > + /* > + * Set aside allocbt blocks because these blocks are tracked as free > + * space but not available for allocation. 
Technically this means that a > + * single reservation cannot consume all remaining free space, but the > + * ratio of allocbt blocks to usable free blocks should be rather small. > + * The tradeoff without this is that filesystems that maintain high > + * perag block reservations can over reserve physical block availability > + * and fail physical allocation, which leads to much more serious > + * problems (i.e. transaction abort, pagecache discards, etc.) than > + * slightly premature -ENOSPC. > + */ > + set_aside = mp->m_alloc_set_aside; > + if (mp->m_has_agresv) > + set_aside += atomic64_read(&mp->m_allocbt_blks); > + > percpu_counter_add_batch(&mp->m_fdblocks, delta, batch); > - if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside, > + if (__percpu_counter_compare(&mp->m_fdblocks, set_aside, > XFS_FDBLOCKS_BATCH) >= 0) { > /* we had space! */ > return 0; > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h > index 489d9b2c53d9..041f437dc117 100644 > --- a/fs/xfs/xfs_mount.h > +++ b/fs/xfs/xfs_mount.h > @@ -171,6 +171,12 @@ typedef struct xfs_mount { > * extents or anything related to the rt device. > */ > struct percpu_counter m_delalloc_blks; > + /* > + * Global count of allocation btree blocks in use across all AGs. Only > + * used when perag reservation is enabled. Helps prevent block > + * reservation from attempting to reserve allocation btree blocks. > + */ > + atomic64_t m_allocbt_blks; > > struct radix_tree_root m_perag_tree; /* per-ag accounting info */ > spinlock_t m_perag_lock; /* lock for m_perag_tree */ > -- > 2.26.2 > ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation 2021-03-18 20:31 ` Darrick J. Wong @ 2021-03-19 15:00 ` Brian Foster 2021-03-27 1:34 ` Darrick J. Wong 0 siblings, 1 reply; 23+ messages in thread From: Brian Foster @ 2021-03-19 15:00 UTC (permalink / raw) To: Darrick J. Wong; +Cc: linux-xfs On Thu, Mar 18, 2021 at 01:31:53PM -0700, Darrick J. Wong wrote: > On Thu, Mar 18, 2021 at 12:17:07PM -0400, Brian Foster wrote: > > The blocks used for allocation btrees (bnobt and countbt) are > > technically considered free space. This is because as free space is > > used, allocbt blocks are removed and naturally become available for > > traditional allocation. However, this means that a significant > > portion of free space may consist of in-use btree blocks if free > > space is severely fragmented. > > > > On large filesystems with large perag reservations, this can lead to > > a rare but nasty condition where a significant amount of physical > > free space is available, but the majority of actual usable blocks > > consist of in-use allocbt blocks. We have a record of a (~12TB, 32 > > AG) filesystem with multiple AGs in a state with ~2.5GB or so free > > blocks tracked across ~300 total allocbt blocks, but effectively at > > 100% full because the the free space is entirely consumed by > > refcountbt perag reservation. > > > > Such a large perag reservation is by design on large filesystems. > > The problem is that because the free space is so fragmented, this AG > > contributes the 300 or so allocbt blocks to the global counters as > > free space. If this pattern repeats across enough AGs, the > > filesystem lands in a state where global block reservation can > > outrun physical block availability. For example, a streaming > > buffered write on the affected filesystem continues to allow delayed > > allocation beyond the point where writeback starts to fail due to > > physical block allocation failures. 
The expected behavior is for the > > delalloc block reservation to fail gracefully with -ENOSPC before > > physical block allocation failure is a possibility. > > > > To address this problem, introduce an in-core counter to track the > > sum of all allocbt blocks in use by the filesystem. Use the new > > counter to set these blocks aside at reservation time and thus > > ensure they cannot be reserved until truly available. Since this is > > only necessary when perag reservations are active and the counter > > requires a read of each AGF to fully populate, only enforce on perag > > res enabled filesystems. This allows initialization of the counter > > at ->pagf_init time because the perag reservation init code reads > > each AGF at mount time. > > > > Signed-off-by: Brian Foster <bfoster@redhat.com> > > --- > > fs/xfs/libxfs/xfs_alloc.c | 12 ++++++++++++ > > fs/xfs/libxfs/xfs_alloc_btree.c | 2 ++ > > fs/xfs/xfs_mount.c | 18 +++++++++++++++++- > > fs/xfs/xfs_mount.h | 6 ++++++ > > 4 files changed, 37 insertions(+), 1 deletion(-) > > > > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c > > index 0c623d3c1036..9fa378d2724e 100644 > > --- a/fs/xfs/libxfs/xfs_alloc.c > > +++ b/fs/xfs/libxfs/xfs_alloc.c > > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf( > > struct xfs_agf *agf; /* ag freelist header */ > > struct xfs_perag *pag; /* per allocation group data */ > > int error; > > + uint32_t allocbt_blks; > > > > trace_xfs_alloc_read_agf(mp, agno); > > > > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf( > > pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level); > > pag->pagf_init = 1; > > pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf); > > + > > + /* > > + * Update the global in-core allocbt block counter. Filter > > + * rmapbt blocks from the on-disk counter because those are > > + * managed by perag reservation. 
> > + */ > > + if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) { > > + allocbt_blks = pag->pagf_btreeblks - > > + be32_to_cpu(agf->agf_rmap_blocks); > > + atomic64_add(allocbt_blks, &mp->m_allocbt_blks); > > Does growfs call xfs_alloc_read_agf to set pagf_init in the perag > structure when it adds AGs to the filesystem? I don't /think/ that's > a problem for this use case (since allocbt_blks should be 0 on a freshly > initialized AG) but i was a little surprised to see block reservation > bits here. > I'm not sure it matters who reads AGFs as long as the global counter remains consistent. For growing an existing AG, it looks like we "free" the new space into the AG so I think that should be tracked accordingly like any other alloc/free. For a new AG, it looks like ->agf_btreeblks starts at 0 and then the perags would likely init through the perag res call that runs as the final step before the growfs returns. > The other thing is that online repair (what little of it is in the > kernel currently) can set pagf_init = 0; is that a problem? > Hmm.. if the AGF is corrupt, it seems likely that the in-core counter is busted as well. We could do something like account the delta from pre and post repair into the global counter, but I'd be wary of scenarios where the AGF might have become inconsistent with the counter somehow and the delta itself would throw it off. That might be unlikely, but what scares me about that is we could actually break the global counter by attempting to fix it so allocbt blocks become reservable again. I'm not sure there's a great answer here. Perhaps the safest thing would be to warn about an ->agf_btreeblks inconsistency that might result in some number of "unusable" blocks until a mount cycle occurs and resynchronizes the global counter..? That also seems to be consistent with how we handle the superblock counters after an agf repair. 
(All that said, I am somewhat hesitant to add to this unless we determine the approach is still viable after the expected perag res changes...). Brian > --D > > > + } > > } > > #ifdef DEBUG > > else if (!XFS_FORCED_SHUTDOWN(mp)) { > > diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c > > index 8e01231b308e..9f5a45f7baed 100644 > > --- a/fs/xfs/libxfs/xfs_alloc_btree.c > > +++ b/fs/xfs/libxfs/xfs_alloc_btree.c > > @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block( > > return 0; > > } > > > > + atomic64_inc(&cur->bc_mp->m_allocbt_blks); > > xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false); > > > > xfs_trans_agbtree_delta(cur->bc_tp, 1); > > @@ -95,6 +96,7 @@ xfs_allocbt_free_block( > > if (error) > > return error; > > > > + atomic64_dec(&cur->bc_mp->m_allocbt_blks); > > xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1, > > XFS_EXTENT_BUSY_SKIP_DISCARD); > > xfs_trans_agbtree_delta(cur->bc_tp, -1); > > diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c > > index 1c97b155a8ee..29f97e909607 100644 > > --- a/fs/xfs/xfs_mount.c > > +++ b/fs/xfs/xfs_mount.c > > @@ -1176,6 +1176,7 @@ xfs_mod_fdblocks( > > int64_t lcounter; > > long long res_used; > > s32 batch; > > + uint64_t set_aside; > > > > if (delta > 0) { > > /* > > @@ -1215,8 +1216,23 @@ xfs_mod_fdblocks( > > else > > batch = XFS_FDBLOCKS_BATCH; > > > > + /* > > + * Set aside allocbt blocks because these blocks are tracked as free > > + * space but not available for allocation. Technically this means that a > > + * single reservation cannot consume all remaining free space, but the > > + * ratio of allocbt blocks to usable free blocks should be rather small. > > + * The tradeoff without this is that filesystems that maintain high > > + * perag block reservations can over reserve physical block availability > > + * and fail physical allocation, which leads to much more serious > > + * problems (i.e. transaction abort, pagecache discards, etc.) 
than > > + * slightly premature -ENOSPC. > > + */ > > + set_aside = mp->m_alloc_set_aside; > > + if (mp->m_has_agresv) > > + set_aside += atomic64_read(&mp->m_allocbt_blks); > > + > > percpu_counter_add_batch(&mp->m_fdblocks, delta, batch); > > - if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside, > > + if (__percpu_counter_compare(&mp->m_fdblocks, set_aside, > > XFS_FDBLOCKS_BATCH) >= 0) { > > /* we had space! */ > > return 0; > > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h > > index 489d9b2c53d9..041f437dc117 100644 > > --- a/fs/xfs/xfs_mount.h > > +++ b/fs/xfs/xfs_mount.h > > @@ -171,6 +171,12 @@ typedef struct xfs_mount { > > * extents or anything related to the rt device. > > */ > > struct percpu_counter m_delalloc_blks; > > + /* > > + * Global count of allocation btree blocks in use across all AGs. Only > > + * used when perag reservation is enabled. Helps prevent block > > + * reservation from attempting to reserve allocation btree blocks. > > + */ > > + atomic64_t m_allocbt_blks; > > > > struct radix_tree_root m_perag_tree; /* per-ag accounting info */ > > spinlock_t m_perag_lock; /* lock for m_perag_tree */ > > -- > > 2.26.2 > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation 2021-03-19 15:00 ` Brian Foster @ 2021-03-27 1:34 ` Darrick J. Wong 2021-03-27 14:51 ` Brian Foster 0 siblings, 1 reply; 23+ messages in thread From: Darrick J. Wong @ 2021-03-27 1:34 UTC (permalink / raw) To: Brian Foster; +Cc: linux-xfs On Fri, Mar 19, 2021 at 11:00:29AM -0400, Brian Foster wrote: > On Thu, Mar 18, 2021 at 01:31:53PM -0700, Darrick J. Wong wrote: > > On Thu, Mar 18, 2021 at 12:17:07PM -0400, Brian Foster wrote: > > > The blocks used for allocation btrees (bnobt and countbt) are > > > technically considered free space. This is because as free space is > > > used, allocbt blocks are removed and naturally become available for > > > traditional allocation. However, this means that a significant > > > portion of free space may consist of in-use btree blocks if free > > > space is severely fragmented. > > > > > > On large filesystems with large perag reservations, this can lead to > > > a rare but nasty condition where a significant amount of physical > > > free space is available, but the majority of actual usable blocks > > > consist of in-use allocbt blocks. We have a record of a (~12TB, 32 > > > AG) filesystem with multiple AGs in a state with ~2.5GB or so free > > > blocks tracked across ~300 total allocbt blocks, but effectively at > > > 100% full because the free space is entirely consumed by > > > refcountbt perag reservation. > > > > > > Such a large perag reservation is by design on large filesystems. > > > The problem is that because the free space is so fragmented, this AG > > > contributes the 300 or so allocbt blocks to the global counters as > > > free space. If this pattern repeats across enough AGs, the > > > filesystem lands in a state where global block reservation can > > > outrun physical block availability.
For example, a streaming > > > buffered write on the affected filesystem continues to allow delayed > > > allocation beyond the point where writeback starts to fail due to > > > physical block allocation failures. The expected behavior is for the > > > delalloc block reservation to fail gracefully with -ENOSPC before > > > physical block allocation failure is a possibility. > > > > > > To address this problem, introduce an in-core counter to track the > > > sum of all allocbt blocks in use by the filesystem. Use the new > > > counter to set these blocks aside at reservation time and thus > > > ensure they cannot be reserved until truly available. Since this is > > > only necessary when perag reservations are active and the counter > > > requires a read of each AGF to fully populate, only enforce on perag > > > res enabled filesystems. This allows initialization of the counter > > > at ->pagf_init time because the perag reservation init code reads > > > each AGF at mount time. > > > > > > Signed-off-by: Brian Foster <bfoster@redhat.com> > > > --- > > > fs/xfs/libxfs/xfs_alloc.c | 12 ++++++++++++ > > > fs/xfs/libxfs/xfs_alloc_btree.c | 2 ++ > > > fs/xfs/xfs_mount.c | 18 +++++++++++++++++- > > > fs/xfs/xfs_mount.h | 6 ++++++ > > > 4 files changed, 37 insertions(+), 1 deletion(-) > > > > > > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c > > > index 0c623d3c1036..9fa378d2724e 100644 > > > --- a/fs/xfs/libxfs/xfs_alloc.c > > > +++ b/fs/xfs/libxfs/xfs_alloc.c > > > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf( > > > struct xfs_agf *agf; /* ag freelist header */ > > > struct xfs_perag *pag; /* per allocation group data */ > > > int error; > > > + uint32_t allocbt_blks; > > > > > > trace_xfs_alloc_read_agf(mp, agno); > > > > > > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf( > > > pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level); > > > pag->pagf_init = 1; > > > pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf); > > > + > > > + /* > > > + * 
Update the global in-core allocbt block counter. Filter > > > + * rmapbt blocks from the on-disk counter because those are > > > + * managed by perag reservation. > > > + */ > > > + if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) { > > > + allocbt_blks = pag->pagf_btreeblks - > > > + be32_to_cpu(agf->agf_rmap_blocks); > > > + atomic64_add(allocbt_blks, &mp->m_allocbt_blks); > > > > Does growfs call xfs_alloc_read_agf to set pagf_init in the perag > > structure when it adds AGs to the filesystem? I don't /think/ that's > > a problem for this use case (since allocbt_blks should be 0 on a freshly > > initialized AG) but i was a little surprised to see block reservation > > bits here. > > > > I'm not sure it matters who reads AGFs as long as the global counter > remains consistent. For growing an existing AG, it looks like we "free" > the new space into the AG so I think that should be tracked accordingly > like any other alloc/free. For a new AG, it looks like ->agf_btreeblks > starts at 0 and then the perags would likely init through the perag res > call that runs as the final step before the growfs returns. <nod> > > The other thing is that online repair (what little of it is in the > > kernel currently) can set pagf_init = 0; is that a problem? > > > > Hmm.. if the AGF is corrupt, it seems likely that the in-core counter is > busted as well. We could do something like account the delta from pre > and post repair into the global counter, but I'd be wary of scenarios > where the AGF might have become inconsistent with the counter somehow How would the allocbt_blks counter become inconsistent with the AGF? We update that incore counter at the same time that we update all the other ondisk and pag counters, so unless the fs is shut down, we know that m_allocbt_blks is off by the same amount as the inconsistent AGF. > and the delta itself would throw it off.
That might be unlikely, but > what scares me about that is we could actually break the global counter > by attempting to fix it so allocbt blocks become reservable again. But if you don't even /try/ to fix the counter during an AGF repair, that almost guarantees that the decisions based on the counter will not be correct... > I'm not sure there's a great answer here. Perhaps the safest thing would > be to warn about an ->agf_btreeblks inconsistency that might result in > some number of "unusable" blocks until a mount cycle occurs and > resynchronizes the global counter..? That also seems to be consistent > with how we handle the superblock counters after an agf repair. ...but OTOH I guess the worst that happens is that we'll ENOSPC early? --D > (All that said, I am somewhat hesitant to add to this unless we > determine the approach is still viable after the expected perag res > changes...). > > Brian > > > --D > > > > > + } > > > } > > > #ifdef DEBUG > > > else if (!XFS_FORCED_SHUTDOWN(mp)) { > > > diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c > > > index 8e01231b308e..9f5a45f7baed 100644 > > > --- a/fs/xfs/libxfs/xfs_alloc_btree.c > > > +++ b/fs/xfs/libxfs/xfs_alloc_btree.c > > > @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block( > > > return 0; > > > } > > > > > > + atomic64_inc(&cur->bc_mp->m_allocbt_blks); > > > xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false); > > > > > > xfs_trans_agbtree_delta(cur->bc_tp, 1); > > > @@ -95,6 +96,7 @@ xfs_allocbt_free_block( > > > if (error) > > > return error; > > > > > > + atomic64_dec(&cur->bc_mp->m_allocbt_blks); > > > xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1, > > > XFS_EXTENT_BUSY_SKIP_DISCARD); > > > xfs_trans_agbtree_delta(cur->bc_tp, -1); > > > diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c > > > index 1c97b155a8ee..29f97e909607 100644 > > > --- a/fs/xfs/xfs_mount.c > > > +++ b/fs/xfs/xfs_mount.c > > > @@ -1176,6 +1176,7 @@ xfs_mod_fdblocks( 
> > > int64_t lcounter; > > > long long res_used; > > > s32 batch; > > > + uint64_t set_aside; > > > > > > if (delta > 0) { > > > /* > > > @@ -1215,8 +1216,23 @@ xfs_mod_fdblocks( > > > else > > > batch = XFS_FDBLOCKS_BATCH; > > > > > > + /* > > > + * Set aside allocbt blocks because these blocks are tracked as free > > > + * space but not available for allocation. Technically this means that a > > > + * single reservation cannot consume all remaining free space, but the > > > + * ratio of allocbt blocks to usable free blocks should be rather small. > > > + * The tradeoff without this is that filesystems that maintain high > > > + * perag block reservations can over reserve physical block availability > > > + * and fail physical allocation, which leads to much more serious > > > + * problems (i.e. transaction abort, pagecache discards, etc.) than > > > + * slightly premature -ENOSPC. > > > + */ > > > + set_aside = mp->m_alloc_set_aside; > > > + if (mp->m_has_agresv) > > > + set_aside += atomic64_read(&mp->m_allocbt_blks); > > > + > > > percpu_counter_add_batch(&mp->m_fdblocks, delta, batch); > > > - if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside, > > > + if (__percpu_counter_compare(&mp->m_fdblocks, set_aside, > > > XFS_FDBLOCKS_BATCH) >= 0) { > > > /* we had space! */ > > > return 0; > > > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h > > > index 489d9b2c53d9..041f437dc117 100644 > > > --- a/fs/xfs/xfs_mount.h > > > +++ b/fs/xfs/xfs_mount.h > > > @@ -171,6 +171,12 @@ typedef struct xfs_mount { > > > * extents or anything related to the rt device. > > > */ > > > struct percpu_counter m_delalloc_blks; > > > + /* > > > + * Global count of allocation btree blocks in use across all AGs. Only > > > + * used when perag reservation is enabled. Helps prevent block > > > + * reservation from attempting to reserve allocation btree blocks. 
> > > + */ > > > + atomic64_t m_allocbt_blks; > > > > > > struct radix_tree_root m_perag_tree; /* per-ag accounting info */ > > > spinlock_t m_perag_lock; /* lock for m_perag_tree */ > > > -- > > > 2.26.2 > > > > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
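[Editorial note: the accounting added to xfs_alloc_read_agf() in the hunk quoted above can be modeled in isolation. This is a sketch under the patch's stated premise that ->pagf_btreeblks also counts rmapbt blocks, which are subtracted because perag reservation already covers them; the guarding comparison additionally keeps the unsigned subtraction from wrapping if an inconsistent AGF reports more rmapbt blocks than total btree blocks.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the per-AG contribution to m_allocbt_blks: total AGF btree
 * blocks minus the rmapbt share, clamped at zero so a stale or corrupt
 * AGF cannot wrap the unsigned subtraction.
 */
static uint64_t allocbt_contribution(uint32_t pagf_btreeblks,
				     uint32_t rmap_blocks)
{
	if (pagf_btreeblks > rmap_blocks)
		return pagf_btreeblks - rmap_blocks;
	return 0;
}
```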
* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation 2021-03-27 1:34 ` Darrick J. Wong @ 2021-03-27 14:51 ` Brian Foster 0 siblings, 0 replies; 23+ messages in thread From: Brian Foster @ 2021-03-27 14:51 UTC (permalink / raw) To: Darrick J. Wong; +Cc: linux-xfs On Fri, Mar 26, 2021 at 06:34:22PM -0700, Darrick J. Wong wrote: > On Fri, Mar 19, 2021 at 11:00:29AM -0400, Brian Foster wrote: > > On Thu, Mar 18, 2021 at 01:31:53PM -0700, Darrick J. Wong wrote: > > > On Thu, Mar 18, 2021 at 12:17:07PM -0400, Brian Foster wrote: > > > > The blocks used for allocation btrees (bnobt and countbt) are > > > > technically considered free space. This is because as free space is > > > > used, allocbt blocks are removed and naturally become available for > > > > traditional allocation. However, this means that a significant > > > > portion of free space may consist of in-use btree blocks if free > > > > space is severely fragmented. > > > > > > > > On large filesystems with large perag reservations, this can lead to > > > > a rare but nasty condition where a significant amount of physical > > > > free space is available, but the majority of actual usable blocks > > > > consist of in-use allocbt blocks. We have a record of a (~12TB, 32 > > > > AG) filesystem with multiple AGs in a state with ~2.5GB or so free > > > > blocks tracked across ~300 total allocbt blocks, but effectively at > > > > 100% full because the free space is entirely consumed by > > > > refcountbt perag reservation. > > > > > > > > Such a large perag reservation is by design on large filesystems. > > > > The problem is that because the free space is so fragmented, this AG > > > > contributes the 300 or so allocbt blocks to the global counters as > > > > free space. If this pattern repeats across enough AGs, the > > > > filesystem lands in a state where global block reservation can > > > > outrun physical block availability.
For example, a streaming > > > > buffered write on the affected filesystem continues to allow delayed > > > > allocation beyond the point where writeback starts to fail due to > > > > physical block allocation failures. The expected behavior is for the > > > > delalloc block reservation to fail gracefully with -ENOSPC before > > > > physical block allocation failure is a possibility. > > > > > > > > To address this problem, introduce an in-core counter to track the > > > > sum of all allocbt blocks in use by the filesystem. Use the new > > > > counter to set these blocks aside at reservation time and thus > > > > ensure they cannot be reserved until truly available. Since this is > > > > only necessary when perag reservations are active and the counter > > > > requires a read of each AGF to fully populate, only enforce on perag > > > > res enabled filesystems. This allows initialization of the counter > > > > at ->pagf_init time because the perag reservation init code reads > > > > each AGF at mount time. 
> > > > > > > > Signed-off-by: Brian Foster <bfoster@redhat.com> > > > > --- > > > > fs/xfs/libxfs/xfs_alloc.c | 12 ++++++++++++ > > > > fs/xfs/libxfs/xfs_alloc_btree.c | 2 ++ > > > > fs/xfs/xfs_mount.c | 18 +++++++++++++++++- > > > > fs/xfs/xfs_mount.h | 6 ++++++ > > > > 4 files changed, 37 insertions(+), 1 deletion(-) > > > > > > > > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c > > > > index 0c623d3c1036..9fa378d2724e 100644 > > > > --- a/fs/xfs/libxfs/xfs_alloc.c > > > > +++ b/fs/xfs/libxfs/xfs_alloc.c > > > > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf( > > > > struct xfs_agf *agf; /* ag freelist header */ > > > > struct xfs_perag *pag; /* per allocation group data */ > > > > int error; > > > > + uint32_t allocbt_blks; > > > > > > > > trace_xfs_alloc_read_agf(mp, agno); > > > > > > > > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf( > > > > pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level); > > > > pag->pagf_init = 1; > > > > pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf); > > > > + > > > > + /* > > > > + * Update the global in-core allocbt block counter. Filter > > > > + * rmapbt blocks from the on-disk counter because those are > > > > + * managed by perag reservation. > > > > + */ > > > > + if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) { > > > > + allocbt_blks = pag->pagf_btreeblks - > > > > + be32_to_cpu(agf->agf_rmap_blocks); > > > > + atomic64_add(allocbt_blks, &mp->m_allocbt_blks); > > > > > > Does growfs call xfs_alloc_read_agf to set pagf_init in the perag > > > structure when it adds AGs to the filesystem? I don't /think/ that's > > > a problem for this use case (since allocbt_blks should be 0 on a freshly > > > initialized AG) but i was a little surprised to see block reservation > > > bits here. > > > > > > > I'm not sure it matters who reads AGFs as long as the global counter > > remains consistent. 
For growing an existing AG, it looks like we "free" > > the new space into the AG so I think that should be tracked accordingly > > like any other alloc/free. For a new AG, it looks like ->agf_btreeblks > > starts at 0 and then the perags would likely init through the perag res > > call that runs as the final step before the growfs returns. > > <nod> > > > > The other thing is that online repair (what little of it is in the > > > kernel currently) can set pagf_init = 0; is that a problem? > > > > > > > Hmm.. if the AGF is corrupt, it seems likely that the in-core counter is > > busted as well. We could do something like account the delta from pre > > and post repair into the global counter, but I'd be wary of scenarios > > where the AGF might have become inconsistent with the counter somehow > > How would the allocbt_blks counter become inconsistent with the AGF? We > update that incore counter at the same time that we update all the other > ondisk and pag counters, so unless the fs is shut down, we know that > m_allocbt_blks is off by the same amount as the inconsistent AGF. > Dunno, bit flip or something? Bug? I'm not going to try and predict how things might break. > > and the delta itself would throw it off. That might be unlikely, but > > what scares me about that is we could actually break the global counter > > by attempting to fix it so allocbt blocks become reservable again. > > But if you don't even /try/ to fix the counter during an AGF repair, > that almost guarantees that the decisions based on the counter will not > be correct... > It's not clear to me how to fix the counter after a single AGF repair such that it's more likely to be correct than not. How do we currently deal with ->m_fdblocks after a repair of a corrupted AGF? It looks to me that we mark the summary counters sick and expect them to reinit on the next mount based on the perag values, but I could be missing something deeper in the scrub code.
Is there a more explicit reinit or something somewhere? > > I'm not sure there's a great answer here. Perhaps the safest thing would > > be to warn about an ->agf_btreeblks inconsistency that might result in > > some number of "unusable" blocks until a mount cycle occurs and > > resynchronizes the global counter..? That also seems to be consistent > > with how we handle the superblock counters after an agf repair. > > ...but OTOH I guess the worst that happens is that we'll ENOSPC early? > Yeah. I think the characteristics of an inconsistent allocbt counter are either that the set aside is ineffective (i.e. counter too low) or premature -ENOSPC (counter too high). Brian > --D > > > (All that said, I am somewhat hesitant to add to this unless we > > determine the approach is still viable after the expected perag res > > changes...). > > > > Brian > > > > > --D > > > > > > > + } > > > > } > > > > #ifdef DEBUG > > > > else if (!XFS_FORCED_SHUTDOWN(mp)) { > > > > diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c > > > > index 8e01231b308e..9f5a45f7baed 100644 > > > > --- a/fs/xfs/libxfs/xfs_alloc_btree.c > > > > +++ b/fs/xfs/libxfs/xfs_alloc_btree.c > > > > @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block( > > > > return 0; > > > > } > > > > > > > > + atomic64_inc(&cur->bc_mp->m_allocbt_blks); > > > > xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false); > > > > > > > > xfs_trans_agbtree_delta(cur->bc_tp, 1); > > > > @@ -95,6 +96,7 @@ xfs_allocbt_free_block( > > > > if (error) > > > > return error; > > > > > > > > + atomic64_dec(&cur->bc_mp->m_allocbt_blks); > > > > xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1, > > > > XFS_EXTENT_BUSY_SKIP_DISCARD); > > > > xfs_trans_agbtree_delta(cur->bc_tp, -1); > > > > diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c > > > > index 1c97b155a8ee..29f97e909607 100644 > > > > --- a/fs/xfs/xfs_mount.c > > > > +++ b/fs/xfs/xfs_mount.c > > > > @@ -1176,6 +1176,7 @@ 
xfs_mod_fdblocks( > > > > int64_t lcounter; > > > > long long res_used; > > > > s32 batch; > > > > + uint64_t set_aside; > > > > > > > > if (delta > 0) { > > > > /* > > > > @@ -1215,8 +1216,23 @@ xfs_mod_fdblocks( > > > > else > > > > batch = XFS_FDBLOCKS_BATCH; > > > > > > > > + /* > > > > + * Set aside allocbt blocks because these blocks are tracked as free > > > > + * space but not available for allocation. Technically this means that a > > > > + * single reservation cannot consume all remaining free space, but the > > > > + * ratio of allocbt blocks to usable free blocks should be rather small. > > > > + * The tradeoff without this is that filesystems that maintain high > > > > + * perag block reservations can over reserve physical block availability > > > > + * and fail physical allocation, which leads to much more serious > > > > + * problems (i.e. transaction abort, pagecache discards, etc.) than > > > > + * slightly premature -ENOSPC. > > > > + */ > > > > + set_aside = mp->m_alloc_set_aside; > > > > + if (mp->m_has_agresv) > > > > + set_aside += atomic64_read(&mp->m_allocbt_blks); > > > > + > > > > percpu_counter_add_batch(&mp->m_fdblocks, delta, batch); > > > > - if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside, > > > > + if (__percpu_counter_compare(&mp->m_fdblocks, set_aside, > > > > XFS_FDBLOCKS_BATCH) >= 0) { > > > > /* we had space! */ > > > > return 0; > > > > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h > > > > index 489d9b2c53d9..041f437dc117 100644 > > > > --- a/fs/xfs/xfs_mount.h > > > > +++ b/fs/xfs/xfs_mount.h > > > > @@ -171,6 +171,12 @@ typedef struct xfs_mount { > > > > * extents or anything related to the rt device. > > > > */ > > > > struct percpu_counter m_delalloc_blks; > > > > + /* > > > > + * Global count of allocation btree blocks in use across all AGs. Only > > > > + * used when perag reservation is enabled. Helps prevent block > > > > + * reservation from attempting to reserve allocation btree blocks. 
> > > > + */ > > > > + atomic64_t m_allocbt_blks; > > > > > > > > struct radix_tree_root m_perag_tree; /* per-ag accounting info */ > > > > spinlock_t m_perag_lock; /* lock for m_perag_tree */ > > > > -- > > > > 2.26.2 > > > > > > > > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation 2021-03-18 16:17 [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation Brian Foster 2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster 2021-03-18 16:17 ` [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation Brian Foster @ 2021-04-09 14:17 ` Brian Foster 2 siblings, 0 replies; 23+ messages in thread From: Brian Foster @ 2021-04-09 14:17 UTC (permalink / raw) To: linux-xfs On Thu, Mar 18, 2021 at 12:17:05PM -0400, Brian Foster wrote: > Hi all, > > This is v3 of the allocbt block set aside fixup. The primary change in > v3 is to filter out rmapbt blocks from the usage accounting. rmapbt > blocks live in free space similar to allocbt blocks, but are managed > appropriately via perag reservation and so should not be set aside from > reservation requests. > > Brian > > v3: > - Use a mount flag for easy detection of active perag reservation. > - Filter rmapbt blocks from allocbt block accounting. > v2: https://lore.kernel.org/linux-xfs/20210222152108.896178-1-bfoster@redhat.com/ > - Use an atomic counter instead of a percpu counter. > v1: https://lore.kernel.org/linux-xfs/20210217132339.651020-1-bfoster@redhat.com/ > Ping on this series..? AFAICT there is no outstanding feedback.. Brian > Brian Foster (2): > xfs: set a mount flag when perag reservation is active > xfs: set aside allocation btree blocks from block reservation > > fs/xfs/libxfs/xfs_ag_resv.c | 24 ++++++++++++++---------- > fs/xfs/libxfs/xfs_alloc.c | 12 ++++++++++++ > fs/xfs/libxfs/xfs_alloc_btree.c | 2 ++ > fs/xfs/xfs_mount.c | 18 +++++++++++++++++- > fs/xfs/xfs_mount.h | 7 +++++++ > 5 files changed, 52 insertions(+), 11 deletions(-) > > -- > 2.26.2 > ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v3 REPOST 0/2] xfs: set aside allocation btree blocks from block reservation @ 2021-04-12 13:30 Brian Foster 2021-04-12 13:30 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster 0 siblings, 1 reply; 23+ messages in thread From: Brian Foster @ 2021-04-12 13:30 UTC (permalink / raw) To: linux-xfs Hi all, There's been a decent amount of discussion on v3 of the series but AFAIA, nothing that has materialized into changes to these two patches. This is just a repost of v3 to bump the series. Brian v3: https://lore.kernel.org/linux-xfs/20210318161707.723742-1-bfoster@redhat.com/ - Use a mount flag for easy detection of active perag reservation. - Filter rmapbt blocks from allocbt block accounting. v2: https://lore.kernel.org/linux-xfs/20210222152108.896178-1-bfoster@redhat.com/ - Use an atomic counter instead of a percpu counter. v1: https://lore.kernel.org/linux-xfs/20210217132339.651020-1-bfoster@redhat.com/ Brian Foster (2): xfs: set a mount flag when perag reservation is active xfs: set aside allocation btree blocks from block reservation fs/xfs/libxfs/xfs_ag_resv.c | 24 ++++++++++++++---------- fs/xfs/libxfs/xfs_alloc.c | 12 ++++++++++++ fs/xfs/libxfs/xfs_alloc_btree.c | 2 ++ fs/xfs/xfs_mount.c | 18 +++++++++++++++++- fs/xfs/xfs_mount.h | 7 +++++++ 5 files changed, 52 insertions(+), 11 deletions(-) -- 2.26.3 ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active 2021-04-12 13:30 [PATCH v3 REPOST " Brian Foster @ 2021-04-12 13:30 ` Brian Foster 2021-04-14 0:49 ` Darrick J. Wong 0 siblings, 1 reply; 23+ messages in thread From: Brian Foster @ 2021-04-12 13:30 UTC (permalink / raw) To: linux-xfs perag reservation is enabled at mount time on a per AG basis. The upcoming in-core allocation btree accounting mechanism needs to know when reservation is enabled and that all perag AGF contexts are initialized. As a preparation step, set a flag in the mount structure and unconditionally initialize the pagf on all mounts where at least one reservation is active. Signed-off-by: Brian Foster <bfoster@redhat.com> --- fs/xfs/libxfs/xfs_ag_resv.c | 24 ++++++++++++++---------- fs/xfs/xfs_mount.h | 1 + 2 files changed, 15 insertions(+), 10 deletions(-) diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c index 6c5f8d10589c..9b2fc4abad2c 100644 --- a/fs/xfs/libxfs/xfs_ag_resv.c +++ b/fs/xfs/libxfs/xfs_ag_resv.c @@ -254,6 +254,7 @@ xfs_ag_resv_init( xfs_extlen_t ask; xfs_extlen_t used; int error = 0; + bool has_resv = false; /* Create the metadata reservation. 
*/ if (pag->pag_meta_resv.ar_asked == 0) { @@ -291,6 +292,8 @@ xfs_ag_resv_init( if (error) goto out; } + if (ask) + has_resv = true; } /* Create the RMAPBT metadata reservation */ @@ -304,18 +307,19 @@ xfs_ag_resv_init( error = __xfs_ag_resv_init(pag, XFS_AG_RESV_RMAPBT, ask, used); if (error) goto out; + if (ask) + has_resv = true; } -#ifdef DEBUG - /* need to read in the AGF for the ASSERT below to work */ - error = xfs_alloc_pagf_init(pag->pag_mount, tp, pag->pag_agno, 0); - if (error) - return error; - - ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved + - xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <= - pag->pagf_freeblks + pag->pagf_flcount); -#endif + if (has_resv) { + mp->m_has_agresv = true; + error = xfs_alloc_pagf_init(mp, tp, pag->pag_agno, 0); + if (error) + return error; + ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved + + xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <= + pag->pagf_freeblks + pag->pagf_flcount); + } out: return error; } diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h index 81829d19596e..8847ffd29777 100644 --- a/fs/xfs/xfs_mount.h +++ b/fs/xfs/xfs_mount.h @@ -139,6 +139,7 @@ typedef struct xfs_mount { bool m_fail_unmount; bool m_finobt_nores; /* no per-AG finobt resv. */ bool m_update_sb; /* sb needs update in mount */ + bool m_has_agresv; /* perag reservations active */ /* * Bitsets of per-fs metadata that have been checked and/or are sick. -- 2.26.3 ^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active 2021-04-12 13:30 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster @ 2021-04-14 0:49 ` Darrick J. Wong 2021-04-20 16:22 ` Brian Foster 2021-04-20 16:23 ` Brian Foster 0 siblings, 2 replies; 23+ messages in thread From: Darrick J. Wong @ 2021-04-14 0:49 UTC (permalink / raw) To: Brian Foster; +Cc: linux-xfs On Mon, Apr 12, 2021 at 09:30:58AM -0400, Brian Foster wrote: > perag reservation is enabled at mount time on a per AG basis. The > upcoming in-core allocation btree accounting mechanism needs to know > when reservation is enabled and that all perag AGF contexts are > initialized. As a preparation step, set a flag in the mount > structure and unconditionally initialize the pagf on all mounts > where at least one reservation is active. > > Signed-off-by: Brian Foster <bfoster@redhat.com> > --- > fs/xfs/libxfs/xfs_ag_resv.c | 24 ++++++++++++++---------- > fs/xfs/xfs_mount.h | 1 + > 2 files changed, 15 insertions(+), 10 deletions(-) > > diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c > index 6c5f8d10589c..9b2fc4abad2c 100644 > --- a/fs/xfs/libxfs/xfs_ag_resv.c > +++ b/fs/xfs/libxfs/xfs_ag_resv.c > @@ -254,6 +254,7 @@ xfs_ag_resv_init( > xfs_extlen_t ask; > xfs_extlen_t used; > int error = 0; > + bool has_resv = false; > > /* Create the metadata reservation. 
*/ > if (pag->pag_meta_resv.ar_asked == 0) { > @@ -291,6 +292,8 @@ xfs_ag_resv_init( > if (error) > goto out; > } > + if (ask) > + has_resv = true; > } > > /* Create the RMAPBT metadata reservation */ > @@ -304,18 +307,19 @@ xfs_ag_resv_init( > error = __xfs_ag_resv_init(pag, XFS_AG_RESV_RMAPBT, ask, used); > if (error) > goto out; > + if (ask) > + has_resv = true; > } > > -#ifdef DEBUG > - /* need to read in the AGF for the ASSERT below to work */ > - error = xfs_alloc_pagf_init(pag->pag_mount, tp, pag->pag_agno, 0); > - if (error) > - return error; > - > - ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved + > - xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <= > - pag->pagf_freeblks + pag->pagf_flcount); > -#endif > + if (has_resv) { > + mp->m_has_agresv = true; If the metadata reservation succeeds but the rmapbt reservation fails with ENOSPC, won't we fail to set m_has_agresv true here? We don't fail the entire mount if ENOSPC happens, which means that there's a slight chance of doing the wrong thing here if all the AGs are (somehow) like that. --D > + error = xfs_alloc_pagf_init(mp, tp, pag->pag_agno, 0); > + if (error) > + return error; > + ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved + > + xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <= > + pag->pagf_freeblks + pag->pagf_flcount); > + } > out: > return error; > } > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h > index 81829d19596e..8847ffd29777 100644 > --- a/fs/xfs/xfs_mount.h > +++ b/fs/xfs/xfs_mount.h > @@ -139,6 +139,7 @@ typedef struct xfs_mount { > bool m_fail_unmount; > bool m_finobt_nores; /* no per-AG finobt resv. */ > bool m_update_sb; /* sb needs update in mount */ > + bool m_has_agresv; /* perag reservations active */ > > /* > * Bitsets of per-fs metadata that have been checked and/or are sick. > -- > 2.26.3 > ^ permalink raw reply [flat|nested] 23+ messages in thread
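[Editorial note: the gap Darrick describes can be checked with a small standalone model of the patched xfs_ag_resv_init() control flow. This is a simplified sketch, not kernel code; the stub error arguments stand in for the __xfs_ag_resv_init() results, and the point is only that the out label sits after the flag-setting branch, so any reservation error skips it even when an earlier reservation already succeeded.]

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

struct model {
	bool has_agresv;	/* stands in for mp->m_has_agresv */
};

/* Mirrors the patch's flow: error paths jump past the flag-setting branch. */
static int resv_init_model(struct model *mp, int meta_err, int rmap_err)
{
	int error = 0;
	bool has_resv = false;

	error = meta_err;		/* metadata reservation */
	if (error)
		goto out;
	has_resv = true;

	error = rmap_err;		/* rmapbt reservation */
	if (error)
		goto out;
	has_resv = true;

	if (has_resv)
		mp->has_agresv = true;	/* skipped by any goto out above */
out:
	return error;
}
```

Moving the out label above the `if (has_resv)` branch would make the flag reflect any successfully created reservation even when a later one fails.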
* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-04-14  0:49 ` Darrick J. Wong
@ 2021-04-20 16:22   ` Brian Foster
  2021-04-20 16:23   ` Brian Foster
  1 sibling, 0 replies; 23+ messages in thread
From: Brian Foster @ 2021-04-20 16:22 UTC
To: Darrick J. Wong; +Cc: linux-xfs

On Tue, Apr 13, 2021 at 05:49:50PM -0700, Darrick J. Wong wrote:
> On Mon, Apr 12, 2021 at 09:30:58AM -0400, Brian Foster wrote:
> > perag reservation is enabled at mount time on a per AG basis. The
> > upcoming in-core allocation btree accounting mechanism needs to know
> > when reservation is enabled and that all perag AGF contexts are
> > initialized. As a preparation step, set a flag in the mount
> > structure and unconditionally initialize the pagf on all mounts
> > where at least one reservation is active.
> >
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
> >  fs/xfs/libxfs/xfs_ag_resv.c | 24 ++++++++++++++----------
> >  fs/xfs/xfs_mount.h          |  1 +
> >  2 files changed, 15 insertions(+), 10 deletions(-)
> >
> > diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c
> > index 6c5f8d10589c..9b2fc4abad2c 100644
> > --- a/fs/xfs/libxfs/xfs_ag_resv.c
> > +++ b/fs/xfs/libxfs/xfs_ag_resv.c
> > @@ -254,6 +254,7 @@ xfs_ag_resv_init(
> >  	xfs_extlen_t		ask;
> >  	xfs_extlen_t		used;
> >  	int			error = 0;
> > +	bool			has_resv = false;
> >
> >  	/* Create the metadata reservation. */
> >  	if (pag->pag_meta_resv.ar_asked == 0) {
> > @@ -291,6 +292,8 @@
> >  			if (error)
> >  				goto out;
> >  		}
> > +		if (ask)
> > +			has_resv = true;
> >  	}
> >
> >  	/* Create the RMAPBT metadata reservation */
> > @@ -304,18 +307,19 @@
> >  		error = __xfs_ag_resv_init(pag, XFS_AG_RESV_RMAPBT, ask, used);
> >  		if (error)
> >  			goto out;
> > +		if (ask)
> > +			has_resv = true;
> >  	}
> >
> > -#ifdef DEBUG
> > -	/* need to read in the AGF for the ASSERT below to work */
> > -	error = xfs_alloc_pagf_init(pag->pag_mount, tp, pag->pag_agno, 0);
> > -	if (error)
> > -		return error;
> > -
> > -	ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
> > -	       xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <=
> > -	       pag->pagf_freeblks + pag->pagf_flcount);
> > -#endif
> > +	if (has_resv) {
> > +		mp->m_has_agresv = true;
>
> If the metadata reservation succeeds but the rmapbt reservation fails
> with ENOSPC, won't we fail to set m_has_agresv true here?  We don't fail
> the entire mount if ENOSPC happens, which means that there's a slight
> chance of doing the wrong thing here if all the AGs are (somehow) like
> that.
>

Yes it looks like we would skip setting the mount flag (and initializing
the pagf). I suppose we should probably just lift the out label up
before this whole branch.

Brian

> --D
>
> > +		error = xfs_alloc_pagf_init(mp, tp, pag->pag_agno, 0);
> > +		if (error)
> > +			return error;
> > +		ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
> > +		       xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <=
> > +		       pag->pagf_freeblks + pag->pagf_flcount);
> > +	}
> >  out:
> >  	return error;
> >  }
> > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> > index 81829d19596e..8847ffd29777 100644
> > --- a/fs/xfs/xfs_mount.h
> > +++ b/fs/xfs/xfs_mount.h
> > @@ -139,6 +139,7 @@ typedef struct xfs_mount {
> >  	bool			m_fail_unmount;
> >  	bool			m_finobt_nores; /* no per-AG finobt resv. */
> >  	bool			m_update_sb;	/* sb needs update in mount */
> > +	bool			m_has_agresv;	/* perag reservations active */
> >
> >  	/*
> >  	 * Bitsets of per-fs metadata that have been checked and/or are sick.
> > --
> > 2.26.3
> >
>
end of thread, other threads: [~2021-04-20 16:23 UTC | newest]

Thread overview: 23+ messages
2021-03-18 16:17 [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster
2021-03-18 20:55 ` Dave Chinner
2021-03-18 22:19 ` Darrick J. Wong
2021-03-19  1:05 ` Dave Chinner
2021-03-19  1:34 ` Darrick J. Wong
2021-03-19 14:54 ` Brian Foster
2021-03-19  1:43 ` Dave Chinner
2021-03-19  1:48 ` Darrick J. Wong
2021-03-19  2:08 ` Dave Chinner
2021-03-19 14:54 ` Brian Foster
2021-03-23 22:40 ` Dave Chinner
2021-03-24 14:24 ` Brian Foster
2021-03-18 16:17 ` [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
2021-03-18 20:31 ` Darrick J. Wong
2021-03-19 15:00 ` Brian Foster
2021-03-27  1:34 ` Darrick J. Wong
2021-03-27 14:51 ` Brian Foster
2021-04-09 14:17 ` [PATCH v3 0/2] " Brian Foster
2021-04-12 13:30 [PATCH v3 REPOST 0/2] " Brian Foster
2021-04-12 13:30 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster
2021-04-14  0:49 ` Darrick J. Wong
2021-04-20 16:22 ` Brian Foster
2021-04-20 16:23 ` Brian Foster