* [PATCH] btrfs: handle shrink_delalloc pages calculation differently
@ 2021-06-01 19:45 Josef Bacik
  2021-06-07 18:20 ` David Sterba
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Josef Bacik @ 2021-06-01 19:45 UTC (permalink / raw)
  To: linux-btrfs, kernel-team; +Cc: stable

We have been hitting some early ENOSPC issues in production with more
recent kernels, and I tracked it down to us simply not flushing delalloc
as aggressively as we should be.  With tracing I was seeing us failing
all tickets with all of the block rsvs at or around 0, with very little
pinned space, but still around 120MiB of outstanding bytes_may_use.
Upon further investigation I saw that we were flushing around 14 pages
per shrink call for delalloc, despite having around 2GiB of delalloc
outstanding.

Consider the example of an 8-way machine with all CPUs trying to create
a file in parallel, which at the time of this commit requires 5 items
each.  Assuming a 16k leaf size, we have 10MiB of total metadata reclaim
size waiting on reservations.  Now assume we have 128MiB of delalloc
outstanding.  With our current math we would set items to 20, and then
set to_reclaim to 20 * 256k, or 5MiB.

Assuming that we went through this loop all 3 times, for both
FLUSH_DELALLOC and FLUSH_DELALLOC_WAIT, and then did the full loop
twice, we'd only flush 60MiB of the 128MiB of delalloc space.  This
could leave a fair amount of delalloc reservations still hanging around
by the time we go to ENOSPC out all the remaining tickets.
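
Back-of-the-envelope, that works out as in the sketch below (a
standalone illustration using only the numbers quoted in the example
above; this is not the kernel's reclaim code, and the loop counts are
simply the ones described in the previous paragraph):

/* Standalone sketch of the example above; not the kernel's reclaim code. */
#include <stdio.h>

int main(void)
{
	unsigned long long items = 20;			 /* from the example */
	unsigned long long per_item = 256 * 1024;	 /* 256k per item */
	unsigned long long per_call = items * per_item;	 /* 5MiB per shrink call */

	/*
	 * 3 iterations each for FLUSH_DELALLOC and FLUSH_DELALLOC_WAIT,
	 * and the whole flush loop runs twice.
	 */
	unsigned long long flushed = per_call * 3 * 2 * 2;
	unsigned long long delalloc = 128ULL << 20;	 /* 128MiB outstanding */

	printf("flushed %lluMiB of %lluMiB delalloc\n",
	       flushed >> 20, delalloc >> 20);		 /* 60MiB of 128MiB */
	return 0;
}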

Fix this in two ways.  First, change the calculations to be a fraction
of the total delalloc bytes on the system.  Prior to my change we were
calculating based on dirty inodes, so our math made more sense; now it's
just completely unrelated to what we're actually doing.
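
A minimal sketch of the new per-call sizing, reusing the example numbers
from above (the >> 3 fraction comes from the patch below; the MAX()
macro here is just a stand-in for the kernel's max() helper):

/* Minimal sketch of the new per-call sizing; example numbers as above. */
#include <stdio.h>

#define MAX(a, b)	((a) > (b) ? (a) : (b))

int main(void)
{
	unsigned long long delalloc_bytes = 128ULL << 20; /* 128MiB outstanding */
	unsigned long long to_reclaim = 10ULL << 20;	  /* 10MiB metadata to reclaim */

	/* Each shrink call now targets at least 1/8th of all outstanding delalloc. */
	to_reclaim = MAX(to_reclaim, delalloc_bytes >> 3);

	printf("per-call flush target: %lluMiB\n", to_reclaim >> 20); /* 16MiB */
	return 0;
}

For the 128MiB example, each pass now aims at 16MiB rather than 5MiB.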

Second, add a FLUSH_DELALLOC_FULL state that we hold off on until we've
gone through the flush states at least once.  This will empty the system
of all delalloc, so we're sure to be truly out of space when we start
failing tickets.
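
How the new state is held off can be sketched as below (a standalone
illustration only: the skip condition mirrors the hunk in
btrfs_async_reclaim_metadata_space() further down, while the enum
values, the names[] table and the driver loop are simplified stand-ins
for the real reclaim loop):

/* Standalone sketch of skipping FLUSH_DELALLOC_FULL on the first pass. */
#include <stdio.h>

enum { FLUSH_DELALLOC = 5, FLUSH_DELALLOC_WAIT, FLUSH_DELALLOC_FULL, ALLOC_CHUNK };

static const char *names[] = {
	[FLUSH_DELALLOC]	= "FLUSH_DELALLOC",
	[FLUSH_DELALLOC_WAIT]	= "FLUSH_DELALLOC_WAIT",
	[FLUSH_DELALLOC_FULL]	= "FLUSH_DELALLOC_FULL",
	[ALLOC_CHUNK]		= "ALLOC_CHUNK",
};

int main(void)
{
	for (int commit_cycles = 0; commit_cycles < 2; commit_cycles++) {
		printf("pass %d:", commit_cycles + 1);
		for (int state = FLUSH_DELALLOC; state <= ALLOC_CHUNK; state++) {
			/* The full flush only runs after one complete pass. */
			if (state == FLUSH_DELALLOC_FULL && !commit_cycles)
				continue;
			printf(" %s", names[state]);
		}
		printf("\n");
	}
	return 0;
}

On the first pass the full flush is skipped; from the second cycle on it
runs between FLUSH_DELALLOC_WAIT and ALLOC_CHUNK.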

I'm tagging stable 5.10 and forward, because this is where we started
using the page stuff heavily again.  This affects earlier kernel
versions as well, but would be a pain to backport to them as the
flushing mechanisms aren't the same.

CC: stable@vger.kernel.org # 5.10
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/ctree.h             | 11 ++++++-----
 fs/btrfs/space-info.c        | 36 +++++++++++++++++++++++++++---------
 include/trace/events/btrfs.h |  1 +
 3 files changed, 34 insertions(+), 14 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 5d0398528a7a..20d7121225d9 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -2788,11 +2788,12 @@ enum btrfs_flush_state {
 	FLUSH_DELAYED_REFS	=	4,
 	FLUSH_DELALLOC		=	5,
 	FLUSH_DELALLOC_WAIT	=	6,
-	ALLOC_CHUNK		=	7,
-	ALLOC_CHUNK_FORCE	=	8,
-	RUN_DELAYED_IPUTS	=	9,
-	COMMIT_TRANS		=	10,
-	FORCE_COMMIT_TRANS	=	11,
+	FLUSH_DELALLOC_FULL	=	7,
+	ALLOC_CHUNK		=	8,
+	ALLOC_CHUNK_FORCE	=	9,
+	RUN_DELAYED_IPUTS	=	10,
+	COMMIT_TRANS		=	11,
+	FORCE_COMMIT_TRANS	=	12,
 };
 
 int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index 42d0fa2092d4..fc329aff478f 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -505,6 +505,10 @@ static void shrink_delalloc(struct btrfs_fs_info *fs_info,
 	long time_left;
 	int loops;
 
+	delalloc_bytes = percpu_counter_sum_positive(
+						&fs_info->delalloc_bytes);
+	ordered_bytes = percpu_counter_sum_positive(&fs_info->ordered_bytes);
+
 	/* Calc the number of the pages we need flush for space reservation */
 	if (to_reclaim == U64_MAX) {
 		items = U64_MAX;
@@ -512,19 +516,21 @@ static void shrink_delalloc(struct btrfs_fs_info *fs_info,
 		/*
 		 * to_reclaim is set to however much metadata we need to
 		 * reclaim, but reclaiming that much data doesn't really track
-		 * exactly, so increase the amount to reclaim by 2x in order to
-		 * make sure we're flushing enough delalloc to hopefully reclaim
-		 * some metadata reservations.
+		 * exactly.  What we really want to do is reclaim full inode's
+		 * worth of reservations, however that's not available to us
+		 * here.  We will take a fraction of the delalloc bytes for our
+		 * flushing loops and hope for the best.  Delalloc will expand
+		 * the amount we write to cover an entire dirty extent, which
+		 * will reclaim the metadata reservation for that range.  If
+		 * it's not enough subsequent flush stages will be more
+		 * aggressive.
 		 */
+		to_reclaim = max(to_reclaim, delalloc_bytes >> 3);
 		items = calc_reclaim_items_nr(fs_info, to_reclaim) * 2;
-		to_reclaim = items * EXTENT_SIZE_PER_ITEM;
 	}
 
 	trans = (struct btrfs_trans_handle *)current->journal_info;
 
-	delalloc_bytes = percpu_counter_sum_positive(
-						&fs_info->delalloc_bytes);
-	ordered_bytes = percpu_counter_sum_positive(&fs_info->ordered_bytes);
 	if (delalloc_bytes == 0 && ordered_bytes == 0)
 		return;
 
@@ -710,8 +716,11 @@ static void flush_space(struct btrfs_fs_info *fs_info,
 		break;
 	case FLUSH_DELALLOC:
 	case FLUSH_DELALLOC_WAIT:
+	case FLUSH_DELALLOC_FULL:
+		if (state == FLUSH_DELALLOC_FULL)
+			num_bytes = U64_MAX;
 		shrink_delalloc(fs_info, space_info, num_bytes,
-				state == FLUSH_DELALLOC_WAIT, for_preempt);
+				state != FLUSH_DELALLOC, for_preempt);
 		break;
 	case FLUSH_DELAYED_REFS_NR:
 	case FLUSH_DELAYED_REFS:
@@ -1037,6 +1046,14 @@ static void btrfs_async_reclaim_metadata_space(struct work_struct *work)
 				commit_cycles--;
 		}
 
+		/*
+		 * We do not want to empty the system of delalloc unless we're
+		 * under heavy pressure, so allow one trip through the flushing
+		 * logic before we start doing a FLUSH_DELALLOC_FULL.
+		 */
+		if (flush_state == FLUSH_DELALLOC_FULL && !commit_cycles)
+			flush_state++;
+
 		/*
 		 * We don't want to force a chunk allocation until we've tried
 		 * pretty hard to reclaim space.  Think of the case where we
@@ -1219,7 +1236,7 @@ static void btrfs_preempt_reclaim_metadata_space(struct work_struct *work)
  *   so if we now have space to allocate do the force chunk allocation.
  */
 static const enum btrfs_flush_state data_flush_states[] = {
-	FLUSH_DELALLOC_WAIT,
+	FLUSH_DELALLOC_FULL,
 	RUN_DELAYED_IPUTS,
 	FLUSH_DELAYED_REFS,
 	COMMIT_TRANS,
@@ -1309,6 +1326,7 @@ static const enum btrfs_flush_state evict_flush_states[] = {
 	FLUSH_DELAYED_REFS,
 	FLUSH_DELALLOC,
 	FLUSH_DELALLOC_WAIT,
+	FLUSH_DELALLOC_FULL,
 	ALLOC_CHUNK,
 	COMMIT_TRANS,
 };
diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
index 76e0be7e14d0..8144b8e345b5 100644
--- a/include/trace/events/btrfs.h
+++ b/include/trace/events/btrfs.h
@@ -94,6 +94,7 @@ struct btrfs_space_info;
 	EM( FLUSH_DELAYED_ITEMS,	"FLUSH_DELAYED_ITEMS")		\
 	EM( FLUSH_DELALLOC,		"FLUSH_DELALLOC")		\
 	EM( FLUSH_DELALLOC_WAIT,	"FLUSH_DELALLOC_WAIT")		\
+	EM( FLUSH_DELALLOC_FULL,	"FLUSH_DELALLOC_FULL")		\
 	EM( FLUSH_DELAYED_REFS_NR,	"FLUSH_DELAYED_REFS_NR")	\
 	EM( FLUSH_DELAYED_REFS,		"FLUSH_ELAYED_REFS")		\
 	EM( ALLOC_CHUNK,		"ALLOC_CHUNK")			\
-- 
2.26.3



* Re: [PATCH] btrfs: handle shrink_delalloc pages calculation differently
  2021-06-01 19:45 [PATCH] btrfs: handle shrink_delalloc pages calculation differently Josef Bacik
@ 2021-06-07 18:20 ` David Sterba
  2021-06-07 18:44 ` David Sterba
  2021-06-22 11:16 ` David Sterba
  2 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2021-06-07 18:20 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs, kernel-team, stable

Is there a better description of the change? I don't find the
'differently' helpful; could it be something like "split delalloc flush
waiting states"?


* Re: [PATCH] btrfs: handle shrink_delalloc pages calculation differently
  2021-06-01 19:45 [PATCH] btrfs: handle shrink_delalloc pages calculation differently Josef Bacik
  2021-06-07 18:20 ` David Sterba
@ 2021-06-07 18:44 ` David Sterba
  2021-06-22 11:16 ` David Sterba
  2 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2021-06-07 18:44 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs, kernel-team, stable

On Tue, Jun 01, 2021 at 03:45:08PM -0400, Josef Bacik wrote:
> [...]
> 
> I'm tagging stable 5.10 and forward, because this is where we started
> using the page stuff heavily again.  This affects earlier kernel
> versions as well, but would be a pain to backport to them as the
> flushing mechanisms aren't the same.

For 5.10 it depends on f00c42dd4cc8b856e6 ("btrfs: introduce a
FORCE_COMMIT_TRANS flush operation") and is followed by the preemptive
flushing series. Prior to the commit introducing COMMIT_TRANS there are
3 patches that seem lightweight enough for a stable backport to 5.10,
but that should be evaluated first.

5.11.x stable is EOL, so 5.12 is OK to pick it up, but if there's
interest in backporting it to 5.10, more work is needed than just tagging.


* Re: [PATCH] btrfs: handle shrink_delalloc pages calculation differently
  2021-06-01 19:45 [PATCH] btrfs: handle shrink_delalloc pages calculation differently Josef Bacik
  2021-06-07 18:20 ` David Sterba
  2021-06-07 18:44 ` David Sterba
@ 2021-06-22 11:16 ` David Sterba
  2021-06-22 11:25   ` David Sterba
  2 siblings, 1 reply; 6+ messages in thread
From: David Sterba @ 2021-06-22 11:16 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs, kernel-team, stable

On Tue, Jun 01, 2021 at 03:45:08PM -0400, Josef Bacik wrote:
> [...]

As this is going to be resent, I'll remove it from misc-next for now.
Updated version can go in as a fix after rc1.


* Re: [PATCH] btrfs: handle shrink_delalloc pages calculation differently
  2021-06-22 11:16 ` David Sterba
@ 2021-06-22 11:25   ` David Sterba
  2021-06-22 13:04     ` David Sterba
  0 siblings, 1 reply; 6+ messages in thread
From: David Sterba @ 2021-06-22 11:25 UTC (permalink / raw)
  To: David Sterba; +Cc: Josef Bacik, linux-btrfs, kernel-team, stable

On Tue, Jun 22, 2021 at 01:16:04PM +0200, David Sterba wrote:
> On Tue, Jun 01, 2021 at 03:45:08PM -0400, Josef Bacik wrote:
> > [...]
> 
> As this is going to be resent, I'll remove it from misc-next for now.
> Updated version can go in as a fix after rc1.

Ok so that does not work, the patchset "[PATCH 0/4][v2] btrfs: commit
the transaction unconditionally for ensopc"
https://lore.kernel.org/linux-btrfs/cover.1623421213.git.josef@toxicpanda.com/
touches the defines and can't be trivially resolved.


* Re: [PATCH] btrfs: handle shrink_delalloc pages calculation differently
  2021-06-22 11:25   ` David Sterba
@ 2021-06-22 13:04     ` David Sterba
  0 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2021-06-22 13:04 UTC (permalink / raw)
  To: dsterba, Josef Bacik, linux-btrfs, kernel-team, stable

On Tue, Jun 22, 2021 at 01:25:50PM +0200, David Sterba wrote:
> On Tue, Jun 22, 2021 at 01:16:04PM +0200, David Sterba wrote:
> > On Tue, Jun 01, 2021 at 03:45:08PM -0400, Josef Bacik wrote:
> > As this is going to be resent, I'll remove it from misc-next for now.
> > Updated version can go in as a fix after rc1.
> 
> Ok so that does not work, the patchset "[PATCH 0/4][v2] btrfs: commit
> the transaction unconditionally for ensopc"
> https://lore.kernel.org/linux-btrfs/cover.1623421213.git.josef@toxicpanda.com/
> touches the defines and can't be trivially resolved.

Nikolay was kind enough to resolve the conflict, so the final status is
that "btrfs: handle shrink_delalloc pages calculation differently" has
been removed from misc-next (due to a known performance drop) and the
refreshed patchset is now in misc-next.

