From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Date: Tue, 08 Aug 2017 07:01:53 +0000
Message-ID: <20170808070153.Horde.Py-WCuyuocI5fv5ZKgCwwkc@vinovium.com>
From: David R
To: Shaohua Li
Cc: Dominik Brodowski, NeilBrown, Shaohua Li, linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org
Subject: Re: [MD] Crash with 4.12+ kernel and high disk load -- bisected to 4ad23a976413: MD: use per-cpu counter for writes_pending
References: <20170807112025.GA3094@light.dominikbrodowski.net> <20170808045103.xpi32xkjidjuxczq@kernel.org> <20170808065526.Horde.9rrpLi3TCaOrBk9KfEU4ZFP@vinovium.com>
In-Reply-To: <20170808065526.Horde.9rrpLi3TCaOrBk9KfEU4ZFP@vinovium.com>
User-Agent: Horde Application Framework 5
Content-Type: text/plain; charset=utf-8; format=flowed; DelSp=Yes
MIME-Version: 1.0
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Ignore me. The increment and decrement of sync_checkers should protect
switch_to_percpu(). Sigh.

Quoting David R:

> Quoting Shaohua Li:
>
>> Spent some time to check this one; unfortunately I can't find how that
>> patch makes rcu stall. The percpu part looks good to me too. Can you
>> double-check that reverting 4ad23a976413aa57 makes the issue go away?
>> When the rcu stall happens, what is /sys/block/md/md0/array_state?
>> Please also attach /proc/mdstat. When you say the mdx_raid1 threads
>> are in 'R' state, can you double-check whether /proc/pid/stack is
>> always 0xffffffffff?
>>
>> Thanks,
>> Shaohua
>
> I confess to knowing absolutely nothing about the md code, so please
> don't be too hard on me.
> However :-
>
> static bool set_in_sync(struct mddev *mddev)
> {
> 	WARN_ON_ONCE(!spin_is_locked(&mddev->lock));
> 	if (!mddev->in_sync) {
> 		mddev->sync_checkers++;
> 		spin_unlock(&mddev->lock);
> 		percpu_ref_switch_to_atomic_sync(&mddev->writes_pending);
> 		spin_lock(&mddev->lock);
> 		if (!mddev->in_sync &&
> 		    percpu_ref_is_zero(&mddev->writes_pending)) {
> 			mddev->in_sync = 1;
> 			/*
> 			 * Ensure ->in_sync is visible before we clear
> 			 * ->sync_checkers.
> 			 */
> 			smp_mb();
> 			set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags);
> 			sysfs_notify_dirent_safe(mddev->sysfs_state);
> 		}
> 		if (--mddev->sync_checkers == 0)
> 			percpu_ref_switch_to_percpu(&mddev->writes_pending);
>
> The switch_to_percpu() takes place under mddev->lock; however,
> switch_to_atomic_sync() does not. A thread can be in the middle of
> (or about to execute) switch_to_atomic_sync() at the same time as
> another is calling switch_to_percpu(). This can't be correct, surely?
>
> Cheers
> David