Date: Tue, 4 Jul 2017 14:48:56 +0200 (CEST)
From: Thomas Gleixner
To: Michal Hocko
cc: LKML, linux-mm@kvack.org, Andrey Ryabinin, Andrew Morton,
    Vlastimil Babka, Vladimir Davydov, Peter Zijlstra
Subject: Re: [patch V2 1/2] mm: swap: Provide lru_add_drain_all_cpuslocked()
In-Reply-To: <20170704105803.GK14722@dhcp22.suse.cz>
Message-ID:
References: <20170704093232.995040438@linutronix.de> <20170704093421.419329357@linutronix.de> <20170704105803.GK14722@dhcp22.suse.cz>
User-Agent: Alpine 2.20 (DEB 67 2015-01-07)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 4 Jul 2017, Michal Hocko wrote:
> On Tue 04-07-17 11:32:33, Thomas Gleixner wrote:
> > The rework of the cpu hotplug locking unearthed potential deadlocks with
> > the memory hotplug locking code.
> >
> > The solution for these is to rework the memory hotplug locking code as well
> > and take the cpu hotplug lock before the memory hotplug lock in
> > mem_hotplug_begin(), but this will cause a recursive locking of the cpu
> > hotplug lock when the memory hotplug code calls lru_add_drain_all().
> >
> > Split out the inner workings of lru_add_drain_all() into
> > lru_add_drain_all_cpuslocked() so this function can be invoked from the
> > memory hotplug code with the cpu hotplug lock held.
>
> You have added callers in the later patch in the series AFAICS which
> is OK but I think it would be better to have them in this patch
> already. Nothing earth shattering (maybe a rebase artifact).

The requirement for changing that comes with the extra hotplug locking in
mem_hotplug_begin(). That locking is required to establish the proper lock
order, and it is what causes the recursive locking in the next patch.

Adding the caller here would be wrong, because then
lru_add_drain_all_cpuslocked() would be called without the cpu hotplug
lock held.

Hens and eggs as usual :)

Thanks,

	tglx
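
To make the split and the lock ordering discussed above concrete, here is a
minimal user-space sketch of the pattern. It is only an illustration, not the
actual kernel patch: the pthread locks merely stand in for the cpu and memory
hotplug locks, and the function bodies are placeholders.

/*
 * Sketch of the "_cpuslocked" split: the locked variant assumes the
 * cpu hotplug read lock is already held; the plain variant is a thin
 * wrapper that takes it. mem_hotplug_begin() establishes the lock
 * order cpu-hotplug-lock -> mem-hotplug-lock, so code inside that
 * section must call the _cpuslocked variant directly.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t cpu_hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t  mem_hotplug_lock = PTHREAD_MUTEX_INITIALIZER;

/* Caller must already hold the cpu hotplug lock (read side). */
static void lru_add_drain_all_cpuslocked(void)
{
	puts("draining per-cpu lru pagevecs (cpu hotplug lock held by caller)");
}

/* Unlocked wrapper; keeps the existing API for all other callers. */
static void lru_add_drain_all(void)
{
	pthread_rwlock_rdlock(&cpu_hotplug_lock);
	lru_add_drain_all_cpuslocked();
	pthread_rwlock_unlock(&cpu_hotplug_lock);
}

/* Lock order: cpu hotplug lock first, then the memory hotplug lock. */
static void mem_hotplug_begin(void)
{
	pthread_rwlock_rdlock(&cpu_hotplug_lock);
	pthread_mutex_lock(&mem_hotplug_lock);
}

static void mem_hotplug_done(void)
{
	pthread_mutex_unlock(&mem_hotplug_lock);
	pthread_rwlock_unlock(&cpu_hotplug_lock);
}

int main(void)
{
	mem_hotplug_begin();
	/* Calling lru_add_drain_all() here would re-take cpu_hotplug_lock. */
	lru_add_drain_all_cpuslocked();
	mem_hotplug_done();
	return 0;
}

The point of the wrapper is that existing callers of lru_add_drain_all() keep
the unlocked API, while code running between mem_hotplug_begin() and
mem_hotplug_done() calls the _cpuslocked variant directly and therefore never
takes the cpu hotplug lock a second time.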