From: Coly Li <colyli@suse.de>
To: Xiao Ni <xni@redhat.com>
Cc: linux-block@vger.kernel.org, nvdimm@lists.linux.dev,
	linux-raid <linux-raid@vger.kernel.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Geliang Tang <geliang.tang@suse.com>,
	Hannes Reinecke <hare@suse.de>, Jens Axboe <axboe@kernel.dk>,
	NeilBrown <neilb@suse.de>,
	Vishal L Verma <vishal.l.verma@intel.com>
Subject: Re: [PATCH v6 2/7] badblocks: add helper routines for badblock ranges handling
Date: Sat, 12 Aug 2023 00:57:15 +0800	[thread overview]
Message-ID: <C64D4553-13AC-4337-B7DA-8C68B85E0C91@suse.de> (raw)
In-Reply-To: <CALTww2-Y6b+Ruqsux9e2gXSngzGioTwENAFsygj5Rbgipgy0wg@mail.gmail.com>



> On Sep 21, 2022, at 20:13, Xiao Ni <xni@redhat.com> wrote:

[snipped]

>> 
>> +/*
>> + * Find the range which starts at-or-before bad->start. If 'hint' is
>> + * provided (hint >= 0) then search the bad table from the hint index
>> + * first. It is very probable that the wanted bad range can be found
>> + * from the hint index, so the unnecessary while-loop iterations can
>> + * be avoided.
>> + */
>> +static int prev_badblocks(struct badblocks *bb, struct badblocks_context *bad,
>> +                         int hint)
>> +{
>> +       sector_t s = bad->start;
>> +       int ret = -1;
>> +       int lo, hi;
>> +       u64 *p;
>> +
>> +       if (!bb->count)
>> +               goto out;
>> +
>> +       if (hint >= 0) {
>> +               ret = prev_by_hint(bb, s, hint);
>> +               if (ret >= 0)
>> +                       goto out;
>> +       }
>> +
>> +       lo = 0;
>> +       hi = bb->count;
>> +       p = bb->page;
> 
> Is it better to check something like this:
> 
> if (BB_OFFSET(p[lo]) > s)
>   return ret;


Yeah, it is worth adding such a check to avoid the following bisect search when we are lucky.
Will do it in the next version.
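As a self-contained illustration (outside the kernel tree, so the BB_* macros are re-defined locally here under the assumed badblocks.h encoding: start sector in bits 62:9, length minus one in bits 8:0, ack in bit 63), the bisect with the suggested early-exit check could look like:

```c
#include <stdint.h>

typedef uint64_t u64;
typedef uint64_t sector_t;

/* Assumed encoding, mirroring badblocks.h. */
#define BB_OFFSET(x)        (((x) & 0x7FFFFFFFFFFFFE00ULL) >> 9)
#define BB_LEN(x)           (((x) & 0x1FFULL) + 1)
#define BB_MAKE(a, l, ack)  (((u64)(a) << 9) | ((l) - 1) | ((u64)(!!(ack)) << 63))

/* Hypothetical stand-in for the bisect part of prev_badblocks():
 * return the index of the last range starting at-or-before 's', or
 * -1 if there is none. The first 'if' is the early-exit check
 * suggested above: when even the first range starts after 's',
 * the while loop can be skipped entirely. */
static int prev_range(u64 *p, int count, sector_t s)
{
	int lo = 0, hi = count;

	if (!count || BB_OFFSET(p[0]) > s)
		return -1;

	while (hi - lo > 1) {
		int mid = (lo + hi) / 2;
		sector_t a = BB_OFFSET(p[mid]);

		if (a == s)
			return mid;
		if (a < s)
			lo = mid;
		else
			hi = mid;
	}
	return lo; /* p[lo] is known to start at-or-before s */
}
```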

> 
>> +
>> +       while (hi - lo > 1) {
>> +               int mid = (lo + hi)/2;
>> +               sector_t a = BB_OFFSET(p[mid]);
>> +
>> +               if (a == s) {
>> +                       ret = mid;
>> +                       goto out;
>> +               }
>> +
>> +               if (a < s)
>> +                       lo = mid;
>> +               else
>> +                       hi = mid;
>> +       }
>> +
>> +       if (BB_OFFSET(p[lo]) <= s)
>> +               ret = lo;
>> +out:
>> +       return ret;
>> +}
>> +

[snipped]

>> 
>> +/*
>> + * Combine the bad ranges indexed by 'prev' and 'prev - 1' (from the
>> + * bad table) into one larger bad range, and the new range is indexed
>> + * by 'prev - 1'.
>> + */
>> +static void front_combine(struct badblocks *bb, int prev)
>> +{
>> +       u64 *p = bb->page;
>> +
>> +       p[prev - 1] = BB_MAKE(BB_OFFSET(p[prev - 1]),
>> +                             BB_LEN(p[prev - 1]) + BB_LEN(p[prev]),
>> +                             BB_ACK(p[prev]));
>> +       if ((prev + 1) < bb->count)
>> +               memmove(p + prev, p + prev + 1, (bb->count - prev - 1) * 8);
>            else
>                    p[prev] = 0;

The caller of front_combine() will decrease bb->count by 1, so clearing p[prev] here can be avoided. I will add a code comment to front_combine() to explain this.
Thanks.
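For illustration only, the combine-then-shift pattern, with the caller's count decrement folded in, can be sketched on a plain array like this (the BB_* macros are re-defined locally under the assumed badblocks.h encoding):

```c
#include <stdint.h>
#include <string.h>

typedef uint64_t u64;

/* Assumed encoding, mirroring badblocks.h. */
#define BB_OFFSET(x)        (((x) & 0x7FFFFFFFFFFFFE00ULL) >> 9)
#define BB_LEN(x)           (((x) & 0x1FFULL) + 1)
#define BB_ACK(x)           (!!((x) & 0x8000000000000000ULL))
#define BB_MAKE(a, l, ack)  (((u64)(a) << 9) | ((l) - 1) | ((u64)(!!(ack)) << 63))

/* Hypothetical helper merging slot (i - 1) and slot i, as
 * front_combine() does, then shifting the tail left by one.
 * Returns the new count; the stale copy left behind in the old
 * last slot is simply ignored once the count is decreased, which
 * is why front_combine() need not clear p[prev]. */
static int combine_front(u64 *p, int count, int i)
{
	p[i - 1] = BB_MAKE(BB_OFFSET(p[i - 1]),
			   BB_LEN(p[i - 1]) + BB_LEN(p[i]),
			   BB_ACK(p[i]));
	if (i + 1 < count)
		memmove(p + i, p + i + 1, (count - i - 1) * sizeof(u64));
	return count - 1;
}
```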

[snipped]

>> +/*
>> + * Return 'true' if the range indicated by 'bad' can overwrite the bad
>> + * range (from the bad table) indexed by 'prev'.
>> + *
>> + * The range indicated by 'bad' can overwrite the bad range indexed by
>> + * 'prev' when,
>> + * 1) The whole range indicated by 'bad' can cover part or all of the
>> + *    bad range (from the bad table) indexed by 'prev'.
>> + * 2) The ack value of 'bad' is larger than or equal to the ack value
>> + *    of the bad range 'prev'.
> 
> In fact, it can overwrite only if the ack value of 'bad' is larger
> than the ack value of the bad range 'prev'.
> If the ack values are equal, it should do a merge operation.

Yes, you are right: if extra is 0, it is indeed a merge operation. And if extra is 1 or 2, it means the bad blocks range is split; I call such a situation an overwrite.
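The distinction can be made concrete with a small hypothetical helper (not from the patch) that counts how many extra slots a split needs; the BB_* macros are re-defined locally under the assumed badblocks.h encoding, with BB_END taken as end-exclusive (offset + length):

```c
#include <stdint.h>

typedef uint64_t u64;
typedef uint64_t sector_t;

/* Assumed encoding, mirroring badblocks.h. */
#define BB_OFFSET(x)        (((x) & 0x7FFFFFFFFFFFFE00ULL) >> 9)
#define BB_LEN(x)           (((x) & 0x1FFULL) + 1)
#define BB_END(x)           (BB_OFFSET(x) + BB_LEN(x))
#define BB_MAKE(a, l, ack)  (((u64)(a) << 9) | ((l) - 1) | ((u64)(!!(ack)) << 63))

/* Hypothetical illustration of 'extra': when [s, s + len) overwrites
 * table entry e with a higher ack, how many extra slots appear?
 *   0 -> e is fully covered (merge-like, no split needed)
 *   1 -> one original fragment of e survives (front or tail)
 *   2 -> [s, s + len) sits strictly inside e, leaving an original
 *        fragment on both sides */
static int split_extra(u64 e, sector_t s, sector_t len)
{
	int extra = 0;

	if (BB_OFFSET(e) < s)
		extra++;
	if (BB_END(e) > s + len)
		extra++;
	return extra;
}
```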

[snipped]


>> +/*
>> + * Do the overwrite from the range indicated by 'bad' to the bad range
>> + * (from the bad table) indexed by 'prev'.
>> + * The previously called can_front_overwrite() provides how many extra
>> + * bad range(s) might be split and added into the bad table. All the
>> + * splitting cases in the bad table are handled here.
>> + */
>> +static int front_overwrite(struct badblocks *bb, int prev,
>> +                          struct badblocks_context *bad, int extra)
>> +{
>> +       u64 *p = bb->page;
>> +       sector_t orig_end = BB_END(p[prev]);
>> +       int orig_ack = BB_ACK(p[prev]);
>> +
>> +       switch (extra) {
>> +       case 0:
>> +               p[prev] = BB_MAKE(BB_OFFSET(p[prev]), BB_LEN(p[prev]),
>> +                                 bad->ack);
>> +               break;
>> +       case 1:
>> +               if (BB_OFFSET(p[prev]) == bad->start) {
>> +                       p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
>> +                                         bad->len, bad->ack);
>> +                       memmove(p + prev + 2, p + prev + 1,
>> +                               (bb->count - prev - 1) * 8);
>> +                       p[prev + 1] = BB_MAKE(bad->start + bad->len,
>> +                                             orig_end - BB_END(p[prev]),
>> +                                             orig_ack);
>> +               } else {
>> +                       p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
>> +                                         bad->start - BB_OFFSET(p[prev]),
>> +                                         BB_ACK(p[prev]));
> 
> s/BB_ACK(p[prev])/orig_ack/g

Yeah, this one is better. I will use it in the next version.


>> +                       /*
>> +                        * prev + 2 -> prev + 1 + 1, which is for,
>> +                        * 1) prev + 1: the slot index of the previous one
>> +                        * 2) + 1: one more slot for extra being 1.
>> +                        */
>> +                       memmove(p + prev + 2, p + prev + 1,
>> +                               (bb->count - prev - 1) * 8);
>> +                       p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack);
>> +               }
>> +               break;
>> +       case 2:
>> +               p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
>> +                                 bad->start - BB_OFFSET(p[prev]),
>> +                                 BB_ACK(p[prev]));
> 
> s/BB_ACK(p[prev])/orig_ack/g

It will be used in the next version.

> 
>> +               /*
>> +                * prev + 3 -> prev + 1 + 2, which is for,
>> +                * 1) prev + 1: the slot index of the previous one
>> +                * 2) + 2: two more slots for extra being 2.
>> +                */
>> +               memmove(p + prev + 3, p + prev + 1,
>> +                       (bb->count - prev - 1) * 8);
>> +               p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack);
>> +               p[prev + 2] = BB_MAKE(BB_END(p[prev + 1]),
>> +                                     orig_end - BB_END(p[prev + 1]),
>> +                                     BB_ACK(p[prev]));
> 
> s/BB_ACK(p[prev])/orig_ack/g


It will be used in the next version.


>> +               break;
>> +       default:
>> +               break;
>> +       }
>> +
>> +       return bad->len;
>> +}
>> +
>> +/*

Thank you for the review!

Coly Li

