* [PATCH RFCv2 0/3] XFS near block allocation algorithm prototype
@ 2019-03-07 17:24 Brian Foster
  2019-03-07 17:24 ` [PATCH RFCv2 1/3] xfs: refactor small allocation helper to skip cntbt attempt Brian Foster
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Brian Foster @ 2019-03-07 17:24 UTC (permalink / raw)
  To: linux-xfs

Hi all,

Here's an RFCv2 of the near block allocation algorithm rework. The
changes since the first RFC primarily drop a bunch of unnecessary
refactoring, further clean up the code, and add minlen support.

While I ran some basic performance tests against the initial version as
I was fleshing out the high level algorithm, this has mostly only seen
functional testing since then. Hence this is still RFC until I've had a
chance to do more thorough testing. The limited performance testing that
I have run consists of a simple single-writer fs_mark test against the
pre-fragmented metadump image provided by the user. A single-threaded
fs_mark writes in the range of 30-100 files/sec (150k sized files) on a
baseline kernel and 550-800 files/sec with the updated allocator
algorithm. The latter range is about the same as the result for both
kernels against a newly created filesystem on the same system. More
testing is certainly required, but I'd like to solicit high level
design feedback before getting too far down the targeted testing path,
including any thoughts on specific performance tests that might be
worthwhile.

The background and motivation for this change is described in the patch
2 commit log as well as in the original thread referenced in the rfcv1
cover letter. The original idea was to replace the current allocator
with a single locality-enhanced cntbt lookup. This refines that idea to
something that combines the cntbt and bnobt searches in a manner that is
optimal for the size of the request (for example, a single fsb near
alloc is a trivial bnobt lookup) and repeats the cntbt lookup to provide
a bit more locality.

While this is written as new code, it is really the existing
allocation algorithm rewritten to facilitate reuse by the other
allocation modes. The longer term objective, beyond improving worst
case near mode allocation behavior, is to reuse this code for all of
the near, exact and size allocation modes. The latter bits will
likely occur as follow-on work.

Note that this series should technically only consist of two patches.
Patch 1 is a dependency fixup for the small allocation fallback.
Patches 2 and 3 are one logical change split in two to improve
readability in patch form: patch 2 adds the new mechanism and patch 3
switches the near allocation path over to it. Thoughts, reviews,
flames appreciated.

Brian

rfcv2:
- Dropped spurious initial refactoring.
- Added minlen functionality.
- Properly tied into near alloc path.
- General refactoring and cleanups.
rfcv1: https://marc.info/?l=linux-xfs&m=154479089914351&w=2

Brian Foster (3):
  xfs: refactor small allocation helper to skip cntbt attempt
  xfs: introduce generic extent allocation infrastructure
  xfs: use generic extent alloc mechanism for near mode allocs

 fs/xfs/libxfs/xfs_alloc.c | 928 +++++++++++++++++++-------------------
 fs/xfs/xfs_trace.h        |  26 +-
 2 files changed, 493 insertions(+), 461 deletions(-)

-- 
2.17.2


* [PATCH RFCv2 1/3] xfs: refactor small allocation helper to skip cntbt attempt
  2019-03-07 17:24 [PATCH RFCv2 0/3] XFS near block allocation algorithm prototype Brian Foster
@ 2019-03-07 17:24 ` Brian Foster
  2019-03-07 17:24 ` [PATCH RFCv2 2/3] xfs: introduce generic extent allocation infrastructure Brian Foster
  2019-03-07 17:24 ` [PATCH RFCv2 3/3] xfs: use generic extent alloc mechanism for near mode allocs Brian Foster
  2 siblings, 0 replies; 4+ messages in thread
From: Brian Foster @ 2019-03-07 17:24 UTC (permalink / raw)
  To: linux-xfs

The small allocation helper is implemented in a way that is fairly
tightly integrated with the existing allocation algorithms. It
expects a cntbt cursor positioned beyond the end of the tree,
attempts to locate the last record in the tree and only attempts an
AGFL allocation if the cntbt is empty.

The generic algorithm doesn't guarantee state or pass along context
for this helper to determine the state of the cntbt. It will only
call this function when the cntbt doesn't have a big enough extent
or is empty. Therefore, tweak xfs_alloc_ag_vextent_small() to allow
for a NULL cntbt cursor and skip the cntbt logic. Instead, consider
allocation from the AGFL or fail the allocation.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_alloc.c | 52 +++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 26 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index bc3367b8b7bb..10a6e22b764d 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -1589,33 +1589,36 @@ xfs_alloc_ag_vextent_size(
  */
 STATIC int			/* error */
 xfs_alloc_ag_vextent_small(
-	xfs_alloc_arg_t	*args,	/* allocation argument structure */
-	xfs_btree_cur_t	*ccur,	/* by-size cursor */
-	xfs_agblock_t	*fbnop,	/* result block number */
-	xfs_extlen_t	*flenp,	/* result length */
-	int		*stat)	/* status: 0-freelist, 1-normal/none */
+	struct xfs_alloc_arg	*args,	/* allocation argument structure */
+	struct xfs_btree_cur	*ccur,	/* optional by-size cursor */
+	xfs_agblock_t		*fbnop,	/* result block number */
+	xfs_extlen_t		*flenp,	/* result length */
+	int			*stat)	/* status: 0-freelist, 1-normal/none */
 {
-	int		error;
-	xfs_agblock_t	fbno;
-	xfs_extlen_t	flen;
-	int		i;
+	int			error = 0;
+	xfs_agblock_t		fbno;
+	xfs_extlen_t		flen;
+	int			i = 0;
 
-	if ((error = xfs_btree_decrement(ccur, 0, &i)))
+	/*
+	 * If a cntbt cursor is provided, try to allocate the largest record in
+	 * the tree. Try the AGFL if the cntbt is empty, otherwise fail the
+	 * allocation. Make sure to respect minleft even when pulling from the
+	 * freelist.
+	 */
+	if (ccur)
+		error = xfs_btree_decrement(ccur, 0, &i);
+	if (error)
 		goto error0;
 	if (i) {
-		if ((error = xfs_alloc_get_rec(ccur, &fbno, &flen, &i)))
+		error = xfs_alloc_get_rec(ccur, &fbno, &flen, &i);
+		if (error)
 			goto error0;
 		XFS_WANT_CORRUPTED_GOTO(args->mp, i == 1, error0);
-	}
-	/*
-	 * Nothing in the btree, try the freelist.  Make sure
-	 * to respect minleft even when pulling from the
-	 * freelist.
-	 */
-	else if (args->minlen == 1 && args->alignment == 1 &&
-		 args->resv != XFS_AG_RESV_AGFL &&
-		 (be32_to_cpu(XFS_BUF_TO_AGF(args->agbp)->agf_flcount)
-		  > args->minleft)) {
+	} else if (args->minlen == 1 && args->alignment == 1 &&
+		   args->resv != XFS_AG_RESV_AGFL &&
+		   (be32_to_cpu(XFS_BUF_TO_AGF(args->agbp)->agf_flcount) >
+		    args->minleft)) {
 		error = xfs_alloc_get_freelist(args->tp, args->agbp, &fbno, 0);
 		if (error)
 			goto error0;
@@ -1661,14 +1664,11 @@ xfs_alloc_ag_vextent_small(
 		 */
 		else
 			flen = 0;
-	}
-	/*
-	 * Can't allocate from the freelist for some reason.
-	 */
-	else {
+	} else {
 		fbno = NULLAGBLOCK;
 		flen = 0;
 	}
+
 	/*
 	 * Can't do the allocation, give up.
 	 */
-- 
2.17.2


* [PATCH RFCv2 2/3] xfs: introduce generic extent allocation infrastructure
  2019-03-07 17:24 [PATCH RFCv2 0/3] XFS near block allocation algorithm prototype Brian Foster
  2019-03-07 17:24 ` [PATCH RFCv2 1/3] xfs: refactor small allocation helper to skip cntbt attempt Brian Foster
@ 2019-03-07 17:24 ` Brian Foster
  2019-03-07 17:24 ` [PATCH RFCv2 3/3] xfs: use generic extent alloc mechanism for near mode allocs Brian Foster
  2 siblings, 0 replies; 4+ messages in thread
From: Brian Foster @ 2019-03-07 17:24 UTC (permalink / raw)
  To: linux-xfs

The extent allocation code in XFS has several allocation modes with
unique implementations. This is slightly unfortunate as the
allocation modes are not all that different from a high level
perspective. The most involved mode is the near allocation mode
which attempts to allocate an optimally sized extent with ideal
locality with respect to a provided agbno.

In the common case of suitable extent availability, a near mode
allocation consists of a conditional scan of the last cntbt block
followed by a concurrent left and right spanning search of the bnobt
starting from the ideal point of locality in the bnobt. This works
reasonably well as filesystems age via most common allocation
patterns. If free space fragments as the filesystem ages, however,
the near algorithm has very poor breakdown characteristics. If the
extent size lookup happens to land outside of the last cntbt block,
the alloc bypasses the cntbt entirely. If the target extent lies
beyond a large enough number of unusable extents from the starting
point(s) of the bnobt search, the bnobt search can take a
significant amount of time to locate the target extent. This leads
to pathological allocation latencies in certain workloads.

The near allocation algorithm can be fundamentally improved to take
advantage of a preexisting mechanism: by-size cntbt record lookups
can incorporate locality. This means that a single cntbt lookup can
return the extent of a particular size with ideal locality (with
respect to an agbno param). As a result, for larger extent
allocations, repeated lookups of the cntbt with an agbno hint can be
far more reliable and efficient than a brute force bnobt search. Such
a cntbt search may not always find the extent with absolute best
locality, but the tradeoff of good enough locality for a more
efficient scan is worthwhile because, more often than not, extent
contiguity matters more for performance than locality.

Introduce a new allocation mechanism based on the existing near
allocation mode with the aforementioned tweaks. The new mechanism
initiates concurrent bnobt spanning and cntbt lookup searches to
naturally balance the optimal approach for smaller vs. larger
allocation requests. For example, smaller allocation requests are
highly likely to be satisfied by the bnobt search. The larger the
allocation request, the more likely the cntbt lookup search locates
the ideal extent. Once the cntbt search locates at least one
suitable allocation candidate, the remaining search for better
locality is accelerated, throttling bnobt scans and bounding the
overall latency of the allocation.

Implement the new mechanism with a generic interface and code
factoring to facilitate reuse for other allocation modes. For
example, the exact and size allocation modes are a subset of this
implementation. It should be possible to perform such allocations in
the future with fairly isolated logic changes to incorporate the
different allocation requirements.

Note that this patch introduces mechanism only and makes no
functional changes. [XXX: Squash with follow on patch to address
static function warnings, etc.].

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_alloc.c | 449 ++++++++++++++++++++++++++++++++++++++
 fs/xfs/xfs_trace.h        |  22 ++
 2 files changed, 471 insertions(+)

diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index 10a6e22b764d..2f09d71ea909 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -981,6 +981,455 @@ xfs_alloc_find_best_extent(
 	return error;
 }
 
+/*
+ * NEAR allocation algorithm and data structures.
+ */
+struct xfs_best_cur {
+	struct xfs_btree_cur	*cur;		/* cursor */
+	bool			done;		/* search done flag */
+};
+
+struct xfs_alloc_best {
+	struct xfs_best_cur	cnt;		/* btree cursors */
+	struct xfs_best_cur	bnolt;
+	struct xfs_best_cur	bnogt;
+	xfs_extlen_t		cur_len;	/* current search length */
+	xfs_agblock_t		rec_bno;	/* extent startblock */
+	xfs_extlen_t		rec_len;	/* extent length */
+	xfs_agblock_t		bno;		/* alloc bno */
+	xfs_extlen_t		len;		/* alloc len */
+	xfs_extlen_t		diff;		/* diff from search bno */
+	unsigned		busy_gen;	/* busy state */
+	bool			busy;
+};
+
+/*
+ * Set up the cursors, etc. in the best extent allocation tracking structure.
+ * This function can be called multiple times to reset an initialized structure
+ * without having to reallocate cursors.
+ */
+static int
+xfs_alloc_best_setup(
+	struct xfs_alloc_arg	*args,
+	struct xfs_alloc_best	*best)
+{
+	int			error;
+	int			i;
+
+	best->cnt.done = best->bnolt.done = best->bnogt.done = true;
+	best->cur_len = args->maxlen;
+	best->rec_bno = 0;
+	best->rec_len = 0;
+	best->bno = 0;
+	best->len = 0;
+	best->diff = -1;
+	best->busy = false;
+	best->busy_gen = 0;
+
+	/*
+	 * Initialize the cntbt cursor and determine whether to start the search
+	 * at maxlen or minlen.
+	 */
+	if (!best->cnt.cur)
+		best->cnt.cur = xfs_allocbt_init_cursor(args->mp, args->tp,
+					args->agbp, args->agno, XFS_BTNUM_CNT);
+	error = xfs_alloc_lookup_ge(best->cnt.cur, args->agbno, best->cur_len,
+				    &i);
+	if (error)
+		return error;
+	if (!i) {
+		best->cur_len = args->minlen;
+		error = xfs_alloc_lookup_ge(best->cnt.cur, args->agbno,
+					    best->cur_len, &i);
+		if (error)
+			return error;
+	}
+	if (i)
+		best->cnt.done = false;
+
+	/*
+	 * Initialize bnobt left/right search cursors. Mark the cursor done if
+	 * either lands at the associated end of the tree.
+	 */
+	if (!best->bnolt.cur)
+		best->bnolt.cur = xfs_allocbt_init_cursor(args->mp, args->tp,
+					args->agbp, args->agno, XFS_BTNUM_BNO);
+	error = xfs_alloc_lookup_le(best->bnolt.cur, args->agbno, args->maxlen,
+				    &i);
+	if (error)
+		return error;
+	if (i)
+		best->bnolt.done = false;
+
+	if (!best->bnogt.cur)
+		best->bnogt.cur = xfs_allocbt_init_cursor(args->mp, args->tp,
+					args->agbp, args->agno, XFS_BTNUM_BNO);
+	error = xfs_alloc_lookup_ge(best->bnogt.cur, args->agbno, args->maxlen,
+				    &i);
+	if (error)
+		return error;
+	if (i)
+		best->bnogt.done = false;
+
+	return 0;
+}
+
+static void
+xfs_alloc_best_close(
+	struct xfs_alloc_best	*best,
+	bool			error)
+{
+	int			cur_error = XFS_BTREE_NOERROR;
+
+	if (error)
+		cur_error = XFS_BTREE_ERROR;
+
+	if (best->cnt.cur)
+		xfs_btree_del_cursor(best->cnt.cur, cur_error);
+	if (best->bnolt.cur)
+		xfs_btree_del_cursor(best->bnolt.cur, cur_error);
+	if (best->bnogt.cur)
+		xfs_btree_del_cursor(best->bnogt.cur, cur_error);
+	best->cnt.cur = best->bnolt.cur = best->bnogt.cur = NULL;
+}
+
+/*
+ * Consider an extent for allocation. If the provided extent fits allocation
+ * criteria and has better locality than the current candidate, store it in the
+ * best extent tracking structure. Mark the search cursor done if it has entered
+ * an out of range state based on allocation criteria. Returns true if the
+ * provided extent has been assigned as the new allocation candidate, false
+ * otherwise.
+ */
+static bool
+xfs_alloc_best_check(
+	struct xfs_alloc_arg	*args,
+	xfs_agblock_t		bno,
+	xfs_extlen_t		len,
+	struct xfs_alloc_best	*best,
+	struct xfs_best_cur	*bcur)
+{
+	xfs_agblock_t		bnoa, bnew;
+	xfs_extlen_t		lena, diff = -1;
+	bool			busy;
+	unsigned		busy_gen = 0;
+	bool			badrange = false;
+	bool			isbnobt = bcur->cur->bc_btnum == XFS_BTNUM_BNO;
+
+	/*
+	 * Check against minlen. If this is a cntbt cursor, it has gone out of
+	 * range. This should only happen when walking backwards.
+	 */
+	if (len < args->minlen) {
+		badrange = !isbnobt;
+		goto fail;
+	}
+
+	/*
+	 * Compute aligned extent and check length and range. If this is a bnobt
+	 * record that violates the range criteria, mark the bnobt cursor done.
+	 */
+	busy = xfs_alloc_compute_aligned(args, bno, len, &bnoa, &lena,
+					 &busy_gen);
+	best->busy |= busy;
+	if (busy)
+		best->busy_gen = busy_gen;
+	if (bnoa < args->min_agbno || bnoa > args->max_agbno) {
+		badrange = isbnobt;
+		goto fail;
+	}
+	if (lena < args->minlen)
+		goto fail;
+
+	args->len = XFS_EXTLEN_MIN(lena, args->maxlen);
+	xfs_alloc_fix_len(args);
+	ASSERT(args->len >= args->minlen);
+	if (args->len < best->len)
+		goto fail;
+
+	/*
+	 * If we have a usable extent, compare it to the current allocation
+	 * candidate. Mark the cursor done if this is a bnobt cursor with worse
+	 * locality.
+	 */
+	diff = xfs_alloc_compute_diff(args->agbno, args->len,
+				      args->alignment, args->datatype,
+				      bnoa, lena, &bnew);
+	if (bnew == NULLAGBLOCK)
+		goto fail;
+	if (diff > best->diff) {
+		badrange = isbnobt;
+		goto fail;
+	}
+	if (args->len < best->len)
+		goto fail;
+
+	/* new candidate extent */
+	best->rec_bno = bno;
+	best->rec_len = len;
+	best->bno = bnew;
+	best->len = args->len;
+	best->diff = diff;
+	trace_xfs_alloc_best_new(args->mp, best->bno, best->len, best->diff);
+	return true;
+
+fail:
+	if (badrange)
+		bcur->done = true;
+	return false;
+}
+
+/*
+ * Perform an allocation of a candidate extent. Remove the extent from both
+ * trees and update the caller's allocation structure.
+ */
+STATIC int
+xfs_alloc_best_finish(
+	struct xfs_alloc_arg	*args,
+	struct xfs_alloc_best	*best,
+	int			*stat)
+{
+	int			error;
+
+	ASSERT(best->len);
+	ASSERT(best->cnt.cur && best->bnolt.cur);
+
+	error = xfs_alloc_fixup_trees(best->cnt.cur, best->bnolt.cur,
+				      best->rec_bno, best->rec_len, best->bno,
+				      best->len, 0);
+	if (error)
+		return error;
+
+	args->agbno = best->bno;
+	args->len = best->len;
+	args->wasfromfl = 0;
+	*stat = 1;
+
+	trace_xfs_alloc_best(args);
+	return 0;
+}
+
+/*
+ * Perform an iterative by-size lookup based on the allocation request. This
+ * generally expects a cntbt cursor and uses the bno optimized lookup to find a
+ * suitably sized extent with ideal locality.
+ */
+STATIC int
+xfs_alloc_lookup_iter(
+	struct xfs_alloc_arg	*args,
+	struct xfs_alloc_best	*best,
+	struct xfs_best_cur	*bcur)
+{
+	struct xfs_btree_cur	*cur = bcur->cur;
+	xfs_agblock_t		bno;
+	xfs_extlen_t		len, cur_len;
+	int			error;
+	int			i;
+
+	/*
+	 * Store the current search key and perform a locality optimized lookup.
+	 */
+	cur_len = best->cur_len;
+	error = xfs_alloc_lookup_ge(cur, args->agbno, cur_len, &i);
+	if (error)
+		return error;
+	if (i == 0) {
+		bcur->done = true;
+		return 0;
+	}
+	error = xfs_alloc_get_rec(cur, &bno, &len, &i);
+	if (error)
+		return error;
+
+	/*
+	 * Check the record and update the search key to the extent length we
+	 * actually found in the tree.
+	 */
+	XFS_WANT_CORRUPTED_RETURN(args->mp, i == 1);
+	xfs_alloc_best_check(args, bno, len, best, bcur);
+	ASSERT(len >= best->cur_len);
+	best->cur_len = len;
+
+	/*
+	 * We looked up the first record >= [agbno, len] above. If this is a
+	 * cntbt search, the agbno is a secondary key and so the record may not
+	 * start beyond agbno if there are no such records for the particular
+	 * length. If it is past agbno, check the previous record too so long as
+	 * the length matches as it may be closer.
+	 */
+	if (bno > args->agbno) {
+		error = xfs_btree_decrement(cur, 0, &i);
+		if (error)
+			return error;
+		if (i) {
+			error = xfs_alloc_get_rec(cur, &bno, &len, &i);
+			if (error)
+				return error;
+			XFS_WANT_CORRUPTED_RETURN(args->mp, i == 1);
+			if (len == best->cur_len)
+				xfs_alloc_best_check(args, bno, len, best,
+						     bcur);
+		}
+	}
+
+	/*
+	 * Increment the search key if we haven't yet found a candidate extent
+	 * or this lookup already significantly bumped the key. We have to make
+	 * sure to not skip any records until we have at least one allocatable
+	 * extent. Note that if the allocation ultimately fails due to busy
+	 * extents, we'll flush the busy list and restart the whole thing.
+	 *
+	 * Otherwise, double the search key for the next lookup to optimize the
+	 * search. This allows us to find good enough locality at the expense of
+	 * absolute best locality. Max extent size and reasonable allocation
+	 * efficiency are more important here than perfect locality.
+	 */
+	cur_len <<= 1;
+	if (!best->len || best->cur_len >= cur_len)
+		best->cur_len++;
+	else
+		best->cur_len = cur_len;
+
+	return error;
+}
+
+/*
+ * Perform an iterative record at a time walk of a btree to find an allocation
+ * candidate extent. This is generally used for left/right spanning searches of
+ * the bnobt, but this also works on a cntbt cursor for cases where a minimal
+ * number of suitably sized extents are available and a more aggressive search
+ * is required.
+ */
+STATIC int
+xfs_alloc_walk_iter(
+	struct xfs_alloc_arg	*args,
+	struct xfs_alloc_best	*best,
+	struct xfs_best_cur	*bcur,
+	bool			increment,
+	int			iters,
+	int			*stat)
+{
+	struct xfs_btree_cur	*cur;
+	int			error = 0;
+	int			i;
+	xfs_agblock_t		bno;
+	xfs_extlen_t		len;
+	bool			found = false;
+
+	if (bcur->done)
+		return 0;
+
+	*stat = 0;
+	cur = bcur->cur;
+	for (; iters > 0; iters--) {
+		error = xfs_alloc_get_rec(cur, &bno, &len, &i);
+		if (error)
+			return error;
+		XFS_WANT_CORRUPTED_RETURN(args->mp, i == 1);
+		found = xfs_alloc_best_check(args, bno, len, best, bcur);
+		if (found) {
+			*stat = 1;
+			break;
+		}
+		if (bcur->done)
+			break;
+
+		if (increment)
+			error = xfs_btree_increment(cur, 0, &i);
+		else
+			error = xfs_btree_decrement(cur, 0, &i);
+		if (error)
+			return error;
+		if (i == 0) {
+			bcur->done = true;
+			break;
+		}
+	}
+
+	return error;
+}
+
+STATIC int
+xfs_alloc_ag_vextent_best(
+	struct xfs_alloc_arg	*args,
+	struct xfs_alloc_best	*best,
+	int			*stat)
+{
+	int			error;
+	int			i;
+	unsigned int		findbestcount = args->mp->m_alloc_mxr[0];
+
+	ASSERT(best->cnt.cur);
+	*stat = 0;
+
+	/* search as long as we have at least one active cursor */
+	while (!best->cnt.done || !best->bnolt.done || !best->bnogt.done) {
+		/*
+		 * Search the bnobt left and right. If either of these find a
+		 * suitable extent, we know we've found ideal locality. Do a
+		 * capped search in the opposite direction and we're done.
+		 */
+		error = xfs_alloc_walk_iter(args, best, &best->bnolt, false, 1,
+					    &i);
+		if (error)
+			return error;
+		if (i) {
+			error = xfs_alloc_walk_iter(args, best, &best->bnogt,
+						    true, findbestcount, &i);
+			if (error)
+				return error;
+			break;
+		}
+
+		error = xfs_alloc_walk_iter(args, best, &best->bnogt, true, 1,
+					    &i);
+		if (error)
+			return error;
+		if (i) {
+			error = xfs_alloc_walk_iter(args, best, &best->bnolt,
+						    false, findbestcount, &i);
+			if (error)
+				return error;
+			break;
+		}
+
+		/*
+		 * Search the cntbt for a maximum sized extent with ideal
+		 * locality. The lookup search terminates on reaching the end of
+		 * the cntbt. Break out of the loop if this occurs to throttle
+		 * the bnobt scans.
+		 */
+		error = xfs_alloc_lookup_iter(args, best, &best->cnt);
+		if (error)
+			return error;
+		if (best->cnt.done)
+			break;
+	}
+
+	/*
+	 * If the lookup search terminated and we still have no suitable
+	 * allocation, scan the largest extents available in the cntbt as a last
+	 * resort. The cntbt done flag means the cursor points off the end of
+	 * the tree. Point it back to the last record in the tree and walk
+	 * backwards from there.
+	 */
+	if (!best->len && best->cnt.done) {
+		best->cnt.done = false;
+		error = xfs_btree_decrement(best->cnt.cur, 0, &i);
+		if (error)
+			return error;
+		if (i) {
+			error = xfs_alloc_walk_iter(args, best, &best->cnt,
+						    false, findbestcount, &i);
+			if (error)
+				return error;
+		}
+	}
+
+	if (best->len)
+		error = xfs_alloc_best_finish(args, best, stat);
+
+	return error;
+}
+
 /*
  * Allocate a variable extent near bno in the allocation group agno.
  * Extent's length (returned in len) will be between minlen and maxlen,
diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
index 47fb07d86efd..7346646405cd 100644
--- a/fs/xfs/xfs_trace.h
+++ b/fs/xfs/xfs_trace.h
@@ -1642,6 +1642,7 @@ DEFINE_ALLOC_EVENT(xfs_alloc_near_lesser);
 DEFINE_ALLOC_EVENT(xfs_alloc_near_error);
 DEFINE_ALLOC_EVENT(xfs_alloc_near_noentry);
 DEFINE_ALLOC_EVENT(xfs_alloc_near_busy);
+DEFINE_ALLOC_EVENT(xfs_alloc_best);
 DEFINE_ALLOC_EVENT(xfs_alloc_size_neither);
 DEFINE_ALLOC_EVENT(xfs_alloc_size_noentry);
 DEFINE_ALLOC_EVENT(xfs_alloc_size_nominleft);
@@ -1658,6 +1659,27 @@ DEFINE_ALLOC_EVENT(xfs_alloc_vextent_noagbp);
 DEFINE_ALLOC_EVENT(xfs_alloc_vextent_loopfailed);
 DEFINE_ALLOC_EVENT(xfs_alloc_vextent_allfailed);
 
+TRACE_EVENT(xfs_alloc_best_new,
+	TP_PROTO(struct xfs_mount *mp, xfs_agblock_t bno, xfs_extlen_t len,
+		 xfs_extlen_t diff),
+	TP_ARGS(mp, bno, len, diff),
+	TP_STRUCT__entry(
+		__field(dev_t, dev)
+		__field(xfs_agblock_t, bno)
+		__field(xfs_extlen_t, len)
+		__field(xfs_extlen_t, diff)
+	),
+	TP_fast_assign(
+		__entry->dev = mp->m_super->s_dev;
+		__entry->bno = bno;
+		__entry->len = len;
+		__entry->diff = diff;
+	),
+	TP_printk("dev %d:%d bno 0x%x len 0x%x diff 0x%x",
+		  MAJOR(__entry->dev), MINOR(__entry->dev),
+		  __entry->bno, __entry->len, __entry->diff)
+)
+
 DECLARE_EVENT_CLASS(xfs_da_class,
 	TP_PROTO(struct xfs_da_args *args),
 	TP_ARGS(args),
-- 
2.17.2


* [PATCH RFCv2 3/3] xfs: use generic extent alloc mechanism for near mode allocs
  2019-03-07 17:24 [PATCH RFCv2 0/3] XFS near block allocation algorithm prototype Brian Foster
  2019-03-07 17:24 ` [PATCH RFCv2 1/3] xfs: refactor small allocation helper to skip cntbt attempt Brian Foster
  2019-03-07 17:24 ` [PATCH RFCv2 2/3] xfs: introduce generic extent allocation infrastructure Brian Foster
@ 2019-03-07 17:24 ` Brian Foster
  2 siblings, 0 replies; 4+ messages in thread
From: Brian Foster @ 2019-03-07 17:24 UTC (permalink / raw)
  To: linux-xfs

Rework the near mode extent allocation algorithm in
xfs_alloc_ag_vextent_near() to use the generic extent allocation
mechanism. The generic mechanism is currently based on the near mode
algorithm with targeted improvements.

[TODO: fold into previous patch]

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_alloc.c | 491 +++-----------------------------------
 fs/xfs/xfs_trace.h        |   4 -
 2 files changed, 28 insertions(+), 467 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index 2f09d71ea909..02fc6df5cc0d 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -886,101 +886,6 @@ xfs_alloc_ag_vextent_exact(
 	return error;
 }
 
-/*
- * Search the btree in a given direction via the search cursor and compare
- * the records found against the good extent we've already found.
- */
-STATIC int
-xfs_alloc_find_best_extent(
-	struct xfs_alloc_arg	*args,	/* allocation argument structure */
-	struct xfs_btree_cur	**gcur,	/* good cursor */
-	struct xfs_btree_cur	**scur,	/* searching cursor */
-	xfs_agblock_t		gdiff,	/* difference for search comparison */
-	xfs_agblock_t		*sbno,	/* extent found by search */
-	xfs_extlen_t		*slen,	/* extent length */
-	xfs_agblock_t		*sbnoa,	/* aligned extent found by search */
-	xfs_extlen_t		*slena,	/* aligned extent length */
-	int			dir)	/* 0 = search right, 1 = search left */
-{
-	xfs_agblock_t		new;
-	xfs_agblock_t		sdiff;
-	int			error;
-	int			i;
-	unsigned		busy_gen;
-
-	/* The good extent is perfect, no need to  search. */
-	if (!gdiff)
-		goto out_use_good;
-
-	/*
-	 * Look until we find a better one, run out of space or run off the end.
-	 */
-	do {
-		error = xfs_alloc_get_rec(*scur, sbno, slen, &i);
-		if (error)
-			goto error0;
-		XFS_WANT_CORRUPTED_GOTO(args->mp, i == 1, error0);
-		xfs_alloc_compute_aligned(args, *sbno, *slen,
-				sbnoa, slena, &busy_gen);
-
-		/*
-		 * The good extent is closer than this one.
-		 */
-		if (!dir) {
-			if (*sbnoa > args->max_agbno)
-				goto out_use_good;
-			if (*sbnoa >= args->agbno + gdiff)
-				goto out_use_good;
-		} else {
-			if (*sbnoa < args->min_agbno)
-				goto out_use_good;
-			if (*sbnoa <= args->agbno - gdiff)
-				goto out_use_good;
-		}
-
-		/*
-		 * Same distance, compare length and pick the best.
-		 */
-		if (*slena >= args->minlen) {
-			args->len = XFS_EXTLEN_MIN(*slena, args->maxlen);
-			xfs_alloc_fix_len(args);
-
-			sdiff = xfs_alloc_compute_diff(args->agbno, args->len,
-						       args->alignment,
-						       args->datatype, *sbnoa,
-						       *slena, &new);
-
-			/*
-			 * Choose closer size and invalidate other cursor.
-			 */
-			if (sdiff < gdiff)
-				goto out_use_search;
-			goto out_use_good;
-		}
-
-		if (!dir)
-			error = xfs_btree_increment(*scur, 0, &i);
-		else
-			error = xfs_btree_decrement(*scur, 0, &i);
-		if (error)
-			goto error0;
-	} while (i);
-
-out_use_good:
-	xfs_btree_del_cursor(*scur, XFS_BTREE_NOERROR);
-	*scur = NULL;
-	return 0;
-
-out_use_search:
-	xfs_btree_del_cursor(*gcur, XFS_BTREE_NOERROR);
-	*gcur = NULL;
-	return 0;
-
-error0:
-	/* caller invalidates cursors */
-	return error;
-}
-
 /*
  * NEAR allocation algorithm and data structures.
  */
@@ -1438,37 +1343,13 @@ xfs_alloc_ag_vextent_best(
  */
 STATIC int				/* error */
 xfs_alloc_ag_vextent_near(
-	xfs_alloc_arg_t	*args)		/* allocation argument structure */
+	struct xfs_alloc_arg	*args)	/* allocation argument structure */
 {
-	xfs_btree_cur_t	*bno_cur_gt;	/* cursor for bno btree, right side */
-	xfs_btree_cur_t	*bno_cur_lt;	/* cursor for bno btree, left side */
-	xfs_btree_cur_t	*cnt_cur;	/* cursor for count btree */
-	xfs_agblock_t	gtbno;		/* start bno of right side entry */
-	xfs_agblock_t	gtbnoa;		/* aligned ... */
-	xfs_extlen_t	gtdiff;		/* difference to right side entry */
-	xfs_extlen_t	gtlen;		/* length of right side entry */
-	xfs_extlen_t	gtlena;		/* aligned ... */
-	xfs_agblock_t	gtnew;		/* useful start bno of right side */
-	int		error;		/* error code */
-	int		i;		/* result code, temporary */
-	int		j;		/* result code, temporary */
-	xfs_agblock_t	ltbno;		/* start bno of left side entry */
-	xfs_agblock_t	ltbnoa;		/* aligned ... */
-	xfs_extlen_t	ltdiff;		/* difference to left side entry */
-	xfs_extlen_t	ltlen;		/* length of left side entry */
-	xfs_extlen_t	ltlena;		/* aligned ... */
-	xfs_agblock_t	ltnew;		/* useful start bno of left side */
-	xfs_extlen_t	rlen;		/* length of returned extent */
-	bool		busy;
-	unsigned	busy_gen;
-#ifdef DEBUG
-	/*
-	 * Randomly don't execute the first algorithm.
-	 */
-	int		dofirst;	/* set to do first algorithm */
-
-	dofirst = prandom_u32() & 1;
-#endif
+	struct xfs_alloc_best	best = {0,};
+	int			error;		/* error code */
+	int			i;		/* result code, temporary */
+	xfs_agblock_t		bno;	      /* start bno of left side entry */
+	xfs_extlen_t		len;		/* length of left side entry */
 
 	/* handle uninitialized agbno range so caller doesn't have to */
 	if (!args->min_agbno && !args->max_agbno)
@@ -1482,353 +1363,37 @@ xfs_alloc_ag_vextent_near(
 		args->agbno = args->max_agbno;
 
 restart:
-	bno_cur_lt = NULL;
-	bno_cur_gt = NULL;
-	ltlen = 0;
-	gtlena = 0;
-	ltlena = 0;
-	busy = false;
-
-	/*
-	 * Get a cursor for the by-size btree.
-	 */
-	cnt_cur = xfs_allocbt_init_cursor(args->mp, args->tp, args->agbp,
-		args->agno, XFS_BTNUM_CNT);
+	/* set up cursors and allocation tracking structure based on args */
+	error = xfs_alloc_best_setup(args, &best);
+	if (error)
+		goto out;
 
-	/*
-	 * See if there are any free extents as big as maxlen.
-	 */
-	if ((error = xfs_alloc_lookup_ge(cnt_cur, 0, args->maxlen, &i)))
-		goto error0;
-	/*
-	 * If none, then pick up the last entry in the tree unless the
-	 * tree is empty.
-	 */
+	error = xfs_alloc_ag_vextent_best(args, &best, &i);
+	if (error)
+		goto out;
 	if (!i) {
-		if ((error = xfs_alloc_ag_vextent_small(args, cnt_cur, &ltbno,
-				&ltlen, &i)))
-			goto error0;
-		if (i == 0 || ltlen == 0) {
-			xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR);
-			trace_xfs_alloc_near_noentry(args);
-			return 0;
-		}
-		ASSERT(i == 1);
-	}
-	args->wasfromfl = 0;
-
-	/*
-	 * First algorithm.
-	 * If the requested extent is large wrt the freespaces available
-	 * in this a.g., then the cursor will be pointing to a btree entry
-	 * near the right edge of the tree.  If it's in the last btree leaf
-	 * block, then we just examine all the entries in that block
-	 * that are big enough, and pick the best one.
-	 * This is written as a while loop so we can break out of it,
-	 * but we never loop back to the top.
-	 */
-	while (xfs_btree_islastblock(cnt_cur, 0)) {
-		xfs_extlen_t	bdiff;
-		int		besti=0;
-		xfs_extlen_t	blen=0;
-		xfs_agblock_t	bnew=0;
-
-#ifdef DEBUG
-		if (dofirst)
-			break;
-#endif
-		/*
-		 * Start from the entry that lookup found, sequence through
-		 * all larger free blocks.  If we're actually pointing at a
-		 * record smaller than maxlen, go to the start of this block,
-		 * and skip all those smaller than minlen.
-		 */
-		if (ltlen || args->alignment > 1) {
-			cnt_cur->bc_ptrs[0] = 1;
-			do {
-				if ((error = xfs_alloc_get_rec(cnt_cur, &ltbno,
-						&ltlen, &i)))
-					goto error0;
-				XFS_WANT_CORRUPTED_GOTO(args->mp, i == 1, error0);
-				if (ltlen >= args->minlen)
-					break;
-				if ((error = xfs_btree_increment(cnt_cur, 0, &i)))
-					goto error0;
-			} while (i);
-			ASSERT(ltlen >= args->minlen);
-			if (!i)
-				break;
-		}
-		i = cnt_cur->bc_ptrs[0];
-		for (j = 1, blen = 0, bdiff = 0;
-		     !error && j && (blen < args->maxlen || bdiff > 0);
-		     error = xfs_btree_increment(cnt_cur, 0, &j)) {
-			/*
-			 * For each entry, decide if it's better than
-			 * the previous best entry.
-			 */
-			if ((error = xfs_alloc_get_rec(cnt_cur, &ltbno, &ltlen, &i)))
-				goto error0;
-			XFS_WANT_CORRUPTED_GOTO(args->mp, i == 1, error0);
-			busy = xfs_alloc_compute_aligned(args, ltbno, ltlen,
-					&ltbnoa, &ltlena, &busy_gen);
-			if (ltlena < args->minlen)
-				continue;
-			if (ltbnoa < args->min_agbno || ltbnoa > args->max_agbno)
-				continue;
-			args->len = XFS_EXTLEN_MIN(ltlena, args->maxlen);
-			xfs_alloc_fix_len(args);
-			ASSERT(args->len >= args->minlen);
-			if (args->len < blen)
-				continue;
-			ltdiff = xfs_alloc_compute_diff(args->agbno, args->len,
-				args->alignment, args->datatype, ltbnoa,
-				ltlena, &ltnew);
-			if (ltnew != NULLAGBLOCK &&
-			    (args->len > blen || ltdiff < bdiff)) {
-				bdiff = ltdiff;
-				bnew = ltnew;
-				blen = args->len;
-				besti = cnt_cur->bc_ptrs[0];
-			}
+		if (best.busy) {
+			trace_xfs_alloc_near_busy(args);
+			xfs_extent_busy_flush(args->mp, args->pag,
+					      best.busy_gen);
+			goto restart;
 		}
-		/*
-		 * It didn't work.  We COULD be in a case where
-		 * there's a good record somewhere, so try again.
-		 */
-		if (blen == 0)
-			break;
-		/*
-		 * Point at the best entry, and retrieve it again.
-		 */
-		cnt_cur->bc_ptrs[0] = besti;
-		if ((error = xfs_alloc_get_rec(cnt_cur, &ltbno, &ltlen, &i)))
-			goto error0;
-		XFS_WANT_CORRUPTED_GOTO(args->mp, i == 1, error0);
-		ASSERT(ltbno + ltlen <= be32_to_cpu(XFS_BUF_TO_AGF(args->agbp)->agf_length));
-		args->len = blen;
-
-		/*
-		 * We are allocating starting at bnew for blen blocks.
-		 */
-		args->agbno = bnew;
-		ASSERT(bnew >= ltbno);
-		ASSERT(bnew + blen <= ltbno + ltlen);
-		/*
-		 * Set up a cursor for the by-bno tree.
-		 */
-		bno_cur_lt = xfs_allocbt_init_cursor(args->mp, args->tp,
-			args->agbp, args->agno, XFS_BTNUM_BNO);
-		/*
-		 * Fix up the btree entries.
-		 */
-		if ((error = xfs_alloc_fixup_trees(cnt_cur, bno_cur_lt, ltbno,
-				ltlen, bnew, blen, XFSA_FIXUP_CNT_OK)))
-			goto error0;
-		xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR);
-		xfs_btree_del_cursor(bno_cur_lt, XFS_BTREE_NOERROR);
 
-		trace_xfs_alloc_near_first(args);
-		return 0;
-	}
-	/*
-	 * Second algorithm.
-	 * Search in the by-bno tree to the left and to the right
-	 * simultaneously, until in each case we find a space big enough,
-	 * or run into the edge of the tree.  When we run into the edge,
-	 * we deallocate that cursor.
-	 * If both searches succeed, we compare the two spaces and pick
-	 * the better one.
-	 * With alignment, it's possible for both to fail; the upper
-	 * level algorithm that picks allocation groups for allocations
-	 * is not supposed to do this.
-	 */
-	/*
-	 * Allocate and initialize the cursor for the leftward search.
-	 */
-	bno_cur_lt = xfs_allocbt_init_cursor(args->mp, args->tp, args->agbp,
-		args->agno, XFS_BTNUM_BNO);
-	/*
-	 * Lookup <= bno to find the leftward search's starting point.
-	 */
-	if ((error = xfs_alloc_lookup_le(bno_cur_lt, args->agbno, args->maxlen, &i)))
-		goto error0;
-	if (!i) {
-		/*
-		 * Didn't find anything; use this cursor for the rightward
-		 * search.
-		 */
-		bno_cur_gt = bno_cur_lt;
-		bno_cur_lt = NULL;
-	}
-	/*
-	 * Found something.  Duplicate the cursor for the rightward search.
-	 */
-	else if ((error = xfs_btree_dup_cursor(bno_cur_lt, &bno_cur_gt)))
-		goto error0;
-	/*
-	 * Increment the cursor, so we will point at the entry just right
-	 * of the leftward entry if any, or to the leftmost entry.
-	 */
-	if ((error = xfs_btree_increment(bno_cur_gt, 0, &i)))
-		goto error0;
-	if (!i) {
 		/*
-		 * It failed, there are no rightward entries.
+		 * We get here if we can't satisfy minlen or the trees are empty,
+		 * so this call returns an AGFL block or nothing.
 		 */
-		xfs_btree_del_cursor(bno_cur_gt, XFS_BTREE_NOERROR);
-		bno_cur_gt = NULL;
-	}
-	/*
-	 * Loop going left with the leftward cursor, right with the
-	 * rightward cursor, until either both directions give up or
-	 * we find an entry at least as big as minlen.
-	 */
-	do {
-		if (bno_cur_lt) {
-			if ((error = xfs_alloc_get_rec(bno_cur_lt, &ltbno, &ltlen, &i)))
-				goto error0;
-			XFS_WANT_CORRUPTED_GOTO(args->mp, i == 1, error0);
-			busy |= xfs_alloc_compute_aligned(args, ltbno, ltlen,
-					&ltbnoa, &ltlena, &busy_gen);
-			if (ltlena >= args->minlen && ltbnoa >= args->min_agbno)
-				break;
-			if ((error = xfs_btree_decrement(bno_cur_lt, 0, &i)))
-				goto error0;
-			if (!i || ltbnoa < args->min_agbno) {
-				xfs_btree_del_cursor(bno_cur_lt,
-						     XFS_BTREE_NOERROR);
-				bno_cur_lt = NULL;
-			}
-		}
-		if (bno_cur_gt) {
-			if ((error = xfs_alloc_get_rec(bno_cur_gt, &gtbno, &gtlen, &i)))
-				goto error0;
-			XFS_WANT_CORRUPTED_GOTO(args->mp, i == 1, error0);
-			busy |= xfs_alloc_compute_aligned(args, gtbno, gtlen,
-					&gtbnoa, &gtlena, &busy_gen);
-			if (gtlena >= args->minlen && gtbnoa <= args->max_agbno)
-				break;
-			if ((error = xfs_btree_increment(bno_cur_gt, 0, &i)))
-				goto error0;
-			if (!i || gtbnoa > args->max_agbno) {
-				xfs_btree_del_cursor(bno_cur_gt,
-						     XFS_BTREE_NOERROR);
-				bno_cur_gt = NULL;
-			}
-		}
-	} while (bno_cur_lt || bno_cur_gt);
-
-	/*
-	 * Got both cursors still active, need to find better entry.
-	 */
-	if (bno_cur_lt && bno_cur_gt) {
-		if (ltlena >= args->minlen) {
-			/*
-			 * Left side is good, look for a right side entry.
-			 */
-			args->len = XFS_EXTLEN_MIN(ltlena, args->maxlen);
-			xfs_alloc_fix_len(args);
-			ltdiff = xfs_alloc_compute_diff(args->agbno, args->len,
-				args->alignment, args->datatype, ltbnoa,
-				ltlena, &ltnew);
-
-			error = xfs_alloc_find_best_extent(args,
-						&bno_cur_lt, &bno_cur_gt,
-						ltdiff, &gtbno, &gtlen,
-						&gtbnoa, &gtlena,
-						0 /* search right */);
-		} else {
-			ASSERT(gtlena >= args->minlen);
-
-			/*
-			 * Right side is good, look for a left side entry.
-			 */
-			args->len = XFS_EXTLEN_MIN(gtlena, args->maxlen);
-			xfs_alloc_fix_len(args);
-			gtdiff = xfs_alloc_compute_diff(args->agbno, args->len,
-				args->alignment, args->datatype, gtbnoa,
-				gtlena, &gtnew);
-
-			error = xfs_alloc_find_best_extent(args,
-						&bno_cur_gt, &bno_cur_lt,
-						gtdiff, &ltbno, &ltlen,
-						&ltbnoa, &ltlena,
-						1 /* search left */);
-		}
-
+		error = xfs_alloc_ag_vextent_small(args, NULL, &bno, &len, &i);
 		if (error)
-			goto error0;
+			goto out;
+		ASSERT(i == 0 || (i && len == 0));
+		trace_xfs_alloc_near_noentry(args);
 	}
 
-	/*
-	 * If we couldn't get anything, give up.
-	 */
-	if (bno_cur_lt == NULL && bno_cur_gt == NULL) {
-		xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR);
-
-		if (busy) {
-			trace_xfs_alloc_near_busy(args);
-			xfs_extent_busy_flush(args->mp, args->pag, busy_gen);
-			goto restart;
-		}
-		trace_xfs_alloc_size_neither(args);
-		args->agbno = NULLAGBLOCK;
-		return 0;
-	}
-
-	/*
-	 * At this point we have selected a freespace entry, either to the
-	 * left or to the right.  If it's on the right, copy all the
-	 * useful variables to the "left" set so we only have one
-	 * copy of this code.
-	 */
-	if (bno_cur_gt) {
-		bno_cur_lt = bno_cur_gt;
-		bno_cur_gt = NULL;
-		ltbno = gtbno;
-		ltbnoa = gtbnoa;
-		ltlen = gtlen;
-		ltlena = gtlena;
-		j = 1;
-	} else
-		j = 0;
-
-	/*
-	 * Fix up the length and compute the useful address.
-	 */
-	args->len = XFS_EXTLEN_MIN(ltlena, args->maxlen);
-	xfs_alloc_fix_len(args);
-	rlen = args->len;
-	(void)xfs_alloc_compute_diff(args->agbno, rlen, args->alignment,
-				     args->datatype, ltbnoa, ltlena, &ltnew);
-	ASSERT(ltnew >= ltbno);
-	ASSERT(ltnew + rlen <= ltbnoa + ltlena);
-	ASSERT(ltnew + rlen <= be32_to_cpu(XFS_BUF_TO_AGF(args->agbp)->agf_length));
-	ASSERT(ltnew >= args->min_agbno && ltnew <= args->max_agbno);
-	args->agbno = ltnew;
-
-	if ((error = xfs_alloc_fixup_trees(cnt_cur, bno_cur_lt, ltbno, ltlen,
-			ltnew, rlen, XFSA_FIXUP_BNO_OK)))
-		goto error0;
-
-	if (j)
-		trace_xfs_alloc_near_greater(args);
-	else
-		trace_xfs_alloc_near_lesser(args);
-
-	xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR);
-	xfs_btree_del_cursor(bno_cur_lt, XFS_BTREE_NOERROR);
-	return 0;
-
- error0:
-	trace_xfs_alloc_near_error(args);
-	if (cnt_cur != NULL)
-		xfs_btree_del_cursor(cnt_cur, XFS_BTREE_ERROR);
-	if (bno_cur_lt != NULL)
-		xfs_btree_del_cursor(bno_cur_lt, XFS_BTREE_ERROR);
-	if (bno_cur_gt != NULL)
-		xfs_btree_del_cursor(bno_cur_gt, XFS_BTREE_ERROR);
+out:
+	xfs_alloc_best_close(&best, error);
+	if (error)
+		trace_xfs_alloc_near_error(args);
 	return error;
 }
 
diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
index 7346646405cd..ed89facfb640 100644
--- a/fs/xfs/xfs_trace.h
+++ b/fs/xfs/xfs_trace.h
@@ -1635,10 +1635,6 @@ DEFINE_EVENT(xfs_alloc_class, name, \
 DEFINE_ALLOC_EVENT(xfs_alloc_exact_done);
 DEFINE_ALLOC_EVENT(xfs_alloc_exact_notfound);
 DEFINE_ALLOC_EVENT(xfs_alloc_exact_error);
-DEFINE_ALLOC_EVENT(xfs_alloc_near_nominleft);
-DEFINE_ALLOC_EVENT(xfs_alloc_near_first);
-DEFINE_ALLOC_EVENT(xfs_alloc_near_greater);
-DEFINE_ALLOC_EVENT(xfs_alloc_near_lesser);
 DEFINE_ALLOC_EVENT(xfs_alloc_near_error);
 DEFINE_ALLOC_EVENT(xfs_alloc_near_noentry);
 DEFINE_ALLOC_EVENT(xfs_alloc_near_busy);
-- 
2.17.2
