From: David R <david@unsolicited.net>
To: Shaohua Li <shli@kernel.org>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>,
NeilBrown <neilb@suse.com>, Shaohua Li <shli@fb.com>,
linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org
Subject: Re: [MD] Crash with 4.12+ kernel and high disk load -- bisected to 4ad23a976413: MD: use per-cpu counter for writes_pending
Date: Tue, 08 Aug 2017 07:01:53 +0000
Message-ID: <20170808070153.Horde.Py-WCuyuocI5fv5ZKgCwwkc@vinovium.com>
In-Reply-To: <20170808065526.Horde.9rrpLi3TCaOrBk9KfEU4ZFP@vinovium.com>
Ignore me. The increment and decrement of sync_checkers should protect
switch_to_percpu(). Sigh.
Quoting David R <david@unsolicited.net>:
> Quoting Shaohua Li <shli@kernel.org>:
>
>> Spent some time checking this one; unfortunately I can't find how that
>> patch causes the rcu stall. The percpu part looks good to me too. Can you
>> double check that reverting 4ad23a976413aa57 makes the issue go away?
>> When the rcu stall happens, what is /sys/block/md/md0/array_state?
>> Please also attach /proc/mdstat. When you say the mdx_raid1 threads are
>> in 'R' state, can you double check whether /proc/pid/stack is always
>> 0xffffffffff?
>>
>> Thanks,
>> Shaohua
>
> I confess to knowing absolutely nothing about the md code, so please
> don't be too hard on me. However:
>
> static bool set_in_sync(struct mddev *mddev)
> {
>         WARN_ON_ONCE(!spin_is_locked(&mddev->lock));
>         if (!mddev->in_sync) {
>                 mddev->sync_checkers++;
>                 spin_unlock(&mddev->lock);
>                 percpu_ref_switch_to_atomic_sync(&mddev->writes_pending);
>                 spin_lock(&mddev->lock);
>                 if (!mddev->in_sync &&
>                     percpu_ref_is_zero(&mddev->writes_pending)) {
>                         mddev->in_sync = 1;
>                         /*
>                          * Ensure ->in_sync is visible before we clear
>                          * ->sync_checkers.
>                          */
>                         smp_mb();
>                         set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags);
>                         sysfs_notify_dirent_safe(mddev->sysfs_state);
>                 }
>                 if (--mddev->sync_checkers == 0)
>                         percpu_ref_switch_to_percpu(&mddev->writes_pending);
>
> The switch_to_percpu() takes place under mddev->lock; however,
> switch_to_atomic_sync() does not. A thread can be in the middle of
> (or about to execute) switch_to_atomic_sync() at the same time as
> another is calling switch_to_percpu(). Surely this can't be correct?
>
> Cheers
> David
Thread overview: 12+ messages
[not found] <ee7921d2-d3ff-2381-1f05-2c07b8001d08@unsolicited.net>
2017-08-06 17:37 ` RESEND: Crash with 4.12.4 kernel and high disk load (monthly RAID 6 check) David R
2017-08-07 11:20 ` [MD] Crash with 4.12+ kernel and high disk load -- bisected to 4ad23a976413: MD: use per-cpu counter for writes_pending Dominik Brodowski
2017-08-08 4:51 ` Shaohua Li
2017-08-08 6:55 ` David R
2017-08-08 7:01 ` David R [this message]
2017-08-08 7:04 ` NeilBrown
2017-08-08 7:10 ` Dominik Brodowski
2017-08-08 7:01 ` NeilBrown
2017-08-08 7:36 ` Dominik Brodowski
2017-08-08 9:06 ` Dominik Brodowski
2017-08-09 6:28 ` David R
2017-08-08 8:02 ` David R