From: Geliang Tang <geliang.tang@suse.com>
To: Coly Li <colyli@suse.de>
Cc: nvdimm@lists.linux.dev, linux-block@vger.kernel.org,
	linux-raid@vger.kernel.org, Hannes Reinecke <hare@suse.de>,
	Jens Axboe <axboe@kernel.dk>, NeilBrown <neilb@suse.de>,
	Vishal L Verma <vishal.l.verma@intel.com>
Subject: Re: [PATCH v3 2/6] badblocks: add helper routines for badblock ranges handling
Date: Thu, 9 Dec 2021 15:22:51 +0800
Message-ID: <20211209072251.GA26976@dhcp-10-157-36-190>
In-Reply-To: <20211202125245.76699-3-colyli@suse.de>

Hi Coly,

Thanks for this new version!

On Thu, Dec 02, 2021 at 08:52:40PM +0800, Coly Li wrote:
> This patch adds several helper routines to improve badblock ranges
> handling. These helper routines will be used later in the improved
> version of badblocks_set()/badblocks_clear()/badblocks_check().
> 
> - Helpers prev_by_hint() and prev_badblocks() are used to find the bad
>   range in the bad table whose start is at or before the start of the
>   range being handled.
> 
> - The following helpers decide the relative layout between the
>   manipulating range and an existing bad block range in the bad table.
>   - can_merge_behind()
>     Return 'true' if the manipulating range can backward merge with the
>     bad block range.
>   - can_merge_front()
>     Return 'true' if the manipulating range can forward merge with the
>     bad block range.
>   - can_combine_front()
>     Return 'true' if two adjacent bad block ranges before the
>     manipulating range can be merged.
>   - overlap_front()
>     Return 'true' if the manipulating range exactly overlaps with the
>     bad block range in front of its range.
>   - overlap_behind()
>     Return 'true' if the manipulating range exactly overlaps with the
>     bad block range behind its range.
>   - can_front_overwrite()
>     Return 'true' if the manipulating range can forward overwrite the
>     bad block range in front of its range.
> 
> - The following helpers add the manipulating range into the bad block
>   table. A different routine is called for each specific relative
>   layout between the manipulating range and the other bad block ranges
>   in the bad block table.
>   - behind_merge()
>     Merge the manipulating range with the bad block range behind its
>     range, and return the merged length in sectors.
>   - front_merge()
>     Merge the manipulating range with the bad block range in front of
>     its range, and return the merged length in sectors.
>   - front_combine()
>     Combine the two adjacent bad block ranges before the manipulating
>     range into a larger one.
>   - front_overwrite()
>     Overwrite part or all of the bad block range which is in front of
>     the manipulating range. The overwrite may split the existing bad
>     block range and add more bad block ranges into the bad block table.
>   - insert_at()
>     Insert the manipulating range at a specific location in the bad
>     block table.
> 
> All the above helpers are used in later patches to improve the bad block
> ranges handling for badblocks_set()/badblocks_clear()/badblocks_check().
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Geliang Tang <geliang.tang@suse.com>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: NeilBrown <neilb@suse.de>
> Cc: Vishal L Verma <vishal.l.verma@intel.com>
> ---
>  block/badblocks.c | 374 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 374 insertions(+)
> 
> diff --git a/block/badblocks.c b/block/badblocks.c
> index d39056630d9c..e216c6791b4b 100644
> --- a/block/badblocks.c
> +++ b/block/badblocks.c
> @@ -16,6 +16,380 @@
>  #include <linux/types.h>
>  #include <linux/slab.h>
>  
> +/*
> + * Find the range that starts at or before 's' in the bad table. The
> + * search starts from index 'hint' and stops at index 'hint_end' of
> + * the bad table.
> + */
> +static int prev_by_hint(struct badblocks *bb, sector_t s, int hint)
> +{
> +	int hint_end = hint + 2;
> +	u64 *p = bb->page;
> +	int ret = -1;
> +
> +	while ((hint < hint_end) && ((hint + 1) <= bb->count) &&
> +	       (BB_OFFSET(p[hint]) <= s)) {
> +		if ((hint + 1) == bb->count || BB_OFFSET(p[hint + 1]) > s) {
> +			ret = hint;
> +			break;
> +		}
> +		hint++;
> +	}
> +
> +	return ret;
> +}
> +
> +/*
> + * Find the range that starts at or before bad->start. If 'hint' is
> + * provided (hint >= 0), search the bad table from the hint index
> + * first. Very probably the wanted bad range can be found there, and
> + * the unnecessary while-loop iterations can be avoided.
> + */
> +static int prev_badblocks(struct badblocks *bb, struct badblocks_context *bad,
> +			  int hint)
> +{
> +	sector_t s = bad->start;
> +	int ret = -1;
> +	int lo, hi;
> +	u64 *p;
> +
> +	if (!bb->count)
> +		goto out;
> +
> +	if (hint >= 0) {
> +		ret = prev_by_hint(bb, s, hint);
> +		if (ret >= 0)
> +			goto out;
> +	}
> +
> +	lo = 0;
> +	hi = bb->count;
> +	p = bb->page;
> +
> +	while (hi - lo > 1) {
> +		int mid = (lo + hi)/2;
> +		sector_t a = BB_OFFSET(p[mid]);
> +
> +		if (a <= s)
> +			lo = mid;
> +		else
> +			hi = mid;
> +	}
> +
> +	if (BB_OFFSET(p[lo]) <= s)
> +		ret = lo;
> +out:
> +	return ret;
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' can be backward merged
> + * with the bad range (from the bad table) indexed by 'behind'.
> + */
> +static bool can_merge_behind(struct badblocks *bb, struct badblocks_context *bad,
> +			     int behind)
> +{
> +	sector_t sectors = bad->len;
> +	sector_t s = bad->start;
> +	int ack = bad->ack;

The local variables 'sectors' and 'ack' are only used once in this
function; how about using 'bad->len' and 'bad->ack' directly?
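
Something like this (untested), keeping 's' since it is used several
times:

static bool can_merge_behind(struct badblocks *bb,
			     struct badblocks_context *bad, int behind)
{
	sector_t s = bad->start;
	u64 *p = bb->page;

	if ((s <= BB_OFFSET(p[behind])) &&
	    ((s + bad->len) >= BB_OFFSET(p[behind])) &&
	    ((BB_END(p[behind]) - s) <= BB_MAX_LEN) &&
	    BB_ACK(p[behind]) == bad->ack)
		return true;
	return false;
}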

> +	u64 *p = bb->page;
> +
> +	if ((s <= BB_OFFSET(p[behind])) &&
> +	    ((s + sectors) >= BB_OFFSET(p[behind])) &&
> +	    ((BB_END(p[behind]) - s) <= BB_MAX_LEN) &&
> +	    BB_ACK(p[behind]) == ack)
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Do backward merge for range indicated by 'bad' and the bad range
> + * (from the bad table) indexed by 'behind'. The return value is the
> + * number of sectors merged from bad->len.
> + */
> +static int behind_merge(struct badblocks *bb, struct badblocks_context *bad,
> +			int behind)
> +{
> +	sector_t sectors = bad->len;
> +	sector_t s = bad->start;
> +	int ack = bad->ack;

The local variable 'ack' can be dropped too.
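
That is (untested):

-		p[behind] =  BB_MAKE(s, BB_LEN(p[behind]) + merged, ack);
+		p[behind] = BB_MAKE(s, BB_LEN(p[behind]) + merged, bad->ack);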

> +	u64 *p = bb->page;
> +	int merged = 0;
> +
> +	WARN_ON(s > BB_OFFSET(p[behind]));
> +	WARN_ON((s + sectors) < BB_OFFSET(p[behind]));
> +
> +	if (s < BB_OFFSET(p[behind])) {
> +		WARN_ON((BB_LEN(p[behind]) + merged) >= BB_MAX_LEN);
> +
> +		merged = min_t(sector_t, sectors, BB_OFFSET(p[behind]) - s);
> +		p[behind] =  BB_MAKE(s, BB_LEN(p[behind]) + merged, ack);
> +	} else {
> +		merged = min_t(sector_t, sectors, BB_LEN(p[behind]));
> +	}
> +
> +	WARN_ON(merged == 0);
> +
> +	return merged;
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' can be forward
> + * merged with the bad range (from the bad table) indexed by 'prev'.
> + */
> +static bool can_merge_front(struct badblocks *bb, int prev,
> +			    struct badblocks_context *bad)
> +{
> +	sector_t s = bad->start;
> +	int ack = bad->ack;

The local variable 'ack' here can be dropped too.
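
e.g. (untested):

-	if (BB_ACK(p[prev]) == ack &&
+	if (BB_ACK(p[prev]) == bad->ack &&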

> +	u64 *p = bb->page;
> +
> +	if (BB_ACK(p[prev]) == ack &&
> +	    (s < BB_END(p[prev]) ||
> +	     (s == BB_END(p[prev]) && (BB_LEN(p[prev]) < BB_MAX_LEN))))
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Do forward merge for range indicated by 'bad' and the bad range
> + * (from the bad table) indexed by 'prev'. The return value is the
> + * number of sectors merged from bad->len.
> + */
> +static int front_merge(struct badblocks *bb, int prev, struct badblocks_context *bad)
> +{
> +	sector_t sectors = bad->len;
> +	sector_t s = bad->start;
> +	int ack = bad->ack;

The local variable 'ack' here can be dropped too.
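
e.g. (untested):

-		p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
-				  BB_LEN(p[prev]) + merged, ack);
+		p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
+				  BB_LEN(p[prev]) + merged, bad->ack);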

> +	u64 *p = bb->page;
> +	int merged = 0;
> +
> +	WARN_ON(s > BB_END(p[prev]));
> +
> +	if (s < BB_END(p[prev])) {
> +		merged = min_t(sector_t, sectors, BB_END(p[prev]) - s);
> +	} else {
> +		merged = min_t(sector_t, sectors, BB_MAX_LEN - BB_LEN(p[prev]));
> +		if ((prev + 1) < bb->count &&
> +		    merged > (BB_OFFSET(p[prev + 1]) - BB_END(p[prev]))) {
> +			merged = BB_OFFSET(p[prev + 1]) - BB_END(p[prev]);
> +		}
> +
> +		p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
> +				  BB_LEN(p[prev]) + merged, ack);
> +	}
> +
> +	return merged;
> +}
> +
> +/*
> + * 'Combine' is a special case that can_merge_front() is not able to
> + * handle: if a bad range (indexed by 'prev' in the bad table) starts
> + * exactly at bad->start, the bad range ahead of 'prev' (indexed by
> + * 'prev - 1' in the bad table) ends exactly where 'prev' starts, and
> + * the sum of their lengths does not exceed the BB_MAX_LEN limitation,
> + * then these two bad ranges (from the bad table) can be combined.
> + *
> + * Return 'true' if bad ranges indexed by 'prev' and 'prev - 1' from bad
> + * table can be combined.
> + */
> +static bool can_combine_front(struct badblocks *bb, int prev,
> +			      struct badblocks_context *bad)
> +{
> +	u64 *p = bb->page;
> +
> +	if ((prev > 0) &&
> +	    (BB_OFFSET(p[prev]) == bad->start) &&
> +	    (BB_END(p[prev - 1]) == BB_OFFSET(p[prev])) &&
> +	    (BB_LEN(p[prev - 1]) + BB_LEN(p[prev]) <= BB_MAX_LEN) &&
> +	    (BB_ACK(p[prev - 1]) == BB_ACK(p[prev])))
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Combine the bad ranges indexed by 'prev' and 'prev - 1' (from bad
> + * table) into one larger bad range, and the new range is indexed by
> + * 'prev - 1'.
> + */
> +static void front_combine(struct badblocks *bb, int prev)
> +{
> +	u64 *p = bb->page;
> +
> +	p[prev - 1] = BB_MAKE(BB_OFFSET(p[prev - 1]),
> +			      BB_LEN(p[prev - 1]) + BB_LEN(p[prev]),
> +			      BB_ACK(p[prev]));
> +	if ((prev + 1) < bb->count)
> +		memmove(p + prev, p + prev + 1, (bb->count - prev - 1) * 8);
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' is exactly forward
> + * overlapped with the bad range (from the bad table) indexed by
> + * 'front'. Exactly forward overlap means the bad range indexed by
> + * 'front' does not cover the whole range indicated by 'bad'.
> + */
> +static bool overlap_front(struct badblocks *bb, int front,
> +			  struct badblocks_context *bad)
> +{
> +	u64 *p = bb->page;
> +
> +	if (bad->start >= BB_OFFSET(p[front]) &&
> +	    bad->start < BB_END(p[front]))
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' is exactly backward
> + * overlapped with the bad range (from bad table) indexed by 'behind'.
> + */
> +static bool overlap_behind(struct badblocks *bb, struct badblocks_context *bad,
> +			   int behind)
> +{
> +	u64 *p = bb->page;
> +
> +	if (bad->start < BB_OFFSET(p[behind]) &&
> +	    (bad->start + bad->len) > BB_OFFSET(p[behind]))
> +		return true;
> +	return false;
> +}
> +
> +/*
> + * Return 'true' if the range indicated by 'bad' can overwrite the bad
> + * range (from bad table) indexed by 'prev'.
> + *
> + * The range indicated by 'bad' can overwrite the bad range indexed by
> + * 'prev' when:
> + * 1) The whole range indicated by 'bad' can cover part or all of the
> + *    bad range (from the bad table) indexed by 'prev'.
> + * 2) The ack value of 'bad' is larger than or equal to the ack value
> + *    of the bad range 'prev'.
> + *
> + * If the overwriting doesn't cover the whole bad range (from the bad
> + * table) indexed by 'prev', new ranges might be split from the
> + * existing bad range:
> + * 1) The overwrite covers the head or tail part of the existing bad
> + *    range; 1 extra bad range will be split and added into the bad
> + *    table.
> + * 2) The overwrite covers the middle of the existing bad range; 2
> + *    extra bad ranges will be split (ahead of and after the
> + *    overwritten range) and added into the bad table.
> + * The number of extra split ranges of the overwriting is stored in
> + * 'extra' for the caller.
> + */
> +static bool can_front_overwrite(struct badblocks *bb, int prev,
> +				struct badblocks_context *bad, int *extra)
> +{
> +	u64 *p = bb->page;
> +	int len;
> +
> +	WARN_ON(!overlap_front(bb, prev, bad));
> +
> +	if (BB_ACK(p[prev]) >= bad->ack)
> +		return false;
> +
> +	if (BB_END(p[prev]) <= (bad->start + bad->len)) {
> +		len = BB_END(p[prev]) - bad->start;
> +		if (BB_OFFSET(p[prev]) == bad->start)
> +			*extra = 0;
> +		else
> +			*extra = 1;
> +
> +		bad->len = len;
> +	} else {
> +		if (BB_OFFSET(p[prev]) == bad->start)
> +			*extra = 1;
> +		else
> +		/*
> +		 * prev range will be split into two, beside the overwritten
> +		 * one, an extra slot needed from bad table.
> +		 */
> +			*extra = 2;
> +	}
> +
> +	if ((bb->count + (*extra)) >= MAX_BADBLOCKS)
> +		return false;
> +
> +	return true;
> +}
> +
> +/*
> + * Do the overwrite from the range indicated by 'bad' to the bad range
> + * (from bad table) indexed by 'prev'.
> + * The previously called can_front_overwrite() will provide how many
> + * extra bad range(s) might be split and added into the bad table. All
> + * the splitting cases in the bad table will be handled here.
> + */
> +static int front_overwrite(struct badblocks *bb, int prev,
> +			   struct badblocks_context *bad, int extra)
> +{
> +	u64 *p = bb->page;
> +	sector_t orig_end = BB_END(p[prev]);
> +	int orig_ack = BB_ACK(p[prev]);

How about renaming 'orig_end' to 'end' and 'orig_ack' to 'ack'?
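
Something like this (untested, the uses below would need updating too):

-	sector_t orig_end = BB_END(p[prev]);
-	int orig_ack = BB_ACK(p[prev]);
+	sector_t end = BB_END(p[prev]);
+	int ack = BB_ACK(p[prev]);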

> +	int n = extra;

I think there's no need to add the local variable 'n' here; just use 1
or 2 instead.

> +
> +	switch (extra) {
> +	case 0:
> +		p[prev] = BB_MAKE(BB_OFFSET(p[prev]), BB_LEN(p[prev]),
> +				  bad->ack);
> +		break;
> +	case 1:
> +		if (BB_OFFSET(p[prev]) == bad->start) {
> +			p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
> +					  bad->len, bad->ack);
> +			memmove(p + prev + 2, p + prev + 1,
> +				(bb->count - prev - 1) * 8);
> +			p[prev + 1] = BB_MAKE(bad->start + bad->len,
> +					      orig_end - BB_END(p[prev]),
> +					      orig_ack);
> +		} else {
> +			p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
> +					  bad->start - BB_OFFSET(p[prev]),
> +					  BB_ACK(p[prev]));
> +			memmove(p + prev + 1 + n, p + prev + 1,

Here, use 'p + prev + 2' instead of 'p + prev + 1 + n':
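
-			memmove(p + prev + 1 + n, p + prev + 1,
+			memmove(p + prev + 2, p + prev + 1,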

> +				(bb->count - prev - 1) * 8);
> +			p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack);
> +		}
> +		break;
> +	case 2:
> +		p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
> +				  bad->start - BB_OFFSET(p[prev]),
> +				  BB_ACK(p[prev]));
> +		memmove(p + prev + 1 + n, p + prev + 1,

And here, use 'p + prev + 3' instead of 'p + prev + 1 + n':
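
-		memmove(p + prev + 1 + n, p + prev + 1,
+		memmove(p + prev + 3, p + prev + 1,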

> +			(bb->count - prev - 1) * 8);
> +		p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack);
> +		p[prev + 2] = BB_MAKE(BB_END(p[prev + 1]),
> +				      orig_end - BB_END(p[prev + 1]),
> +				      BB_ACK(p[prev]));
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return bad->len;
> +}
> +
> +/*
> + * Explicitly insert a range indicated by 'bad' into the bad table, at
> + * the location indexed by 'at'.
> + */
> +static int insert_at(struct badblocks *bb, int at, struct badblocks_context *bad)
> +{
> +	sector_t sectors = bad->len;
> +	sector_t s = bad->start;
> +	int ack = bad->ack;

The local variables 'sectors', 's' and 'ack' can be dropped here.
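
Something like this (untested):

static int insert_at(struct badblocks *bb, int at,
		     struct badblocks_context *bad)
{
	u64 *p = bb->page;
	int len;

	WARN_ON(badblocks_full(bb));

	len = min_t(sector_t, bad->len, BB_MAX_LEN);
	if (at < bb->count)
		memmove(p + at + 1, p + at, (bb->count - at) * 8);
	p[at] = BB_MAKE(bad->start, len, bad->ack);

	return len;
}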

> +	u64 *p = bb->page;
> +	int len;
> +
> +	WARN_ON(badblocks_full(bb));
> +
> +	len = min_t(sector_t, sectors, BB_MAX_LEN);
> +	if (at < bb->count)
> +		memmove(p + at + 1, p + at, (bb->count - at) * 8);
> +	p[at] = BB_MAKE(s, len, ack);
> +
> +	return len;
> +}
> +
>  /**
>   * badblocks_check() - check a given range for bad sectors
>   * @bb:		the badblocks structure that holds all badblock information
> -- 
> 2.31.1
> 

