From mboxrd@z Thu Jan 1 00:00:00 1970
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932326AbcDNPcd (ORCPT );
	Thu, 14 Apr 2016 11:32:33 -0400
Received: from mail-qk0-f194.google.com ([209.85.220.194]:35125 "EHLO
	mail-qk0-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755751AbcDNPca (ORCPT );
	Thu, 14 Apr 2016 11:32:30 -0400
Date: Thu, 14 Apr 2016 11:32:27 -0400
From: Tejun Heo
To: Michal Hocko
Cc: Petr Mladek, cgroups@vger.kernel.org, Cyril Hrubis,
	linux-kernel@vger.kernel.org, Johannes Weiner
Subject: Re: [BUG] cgroup/workques/fork: deadlock when moving cgroups
Message-ID: <20160414153227.GA12583@htj.duckdns.org>
References: <20160413094216.GC5774@pathway.suse.cz>
	<20160413183309.GG3676@htj.duckdns.org>
	<20160413192313.GA30260@dhcp22.suse.cz>
	<20160413193734.GC20142@htj.duckdns.org>
	<20160413194820.GC30260@dhcp22.suse.cz>
	<20160414070623.GC2850@dhcp22.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160414070623.GC2850@dhcp22.suse.cz>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Hello,

On Thu, Apr 14, 2016 at 09:06:23AM +0200, Michal Hocko wrote:
> On Wed 13-04-16 21:48:20, Michal Hocko wrote:
> [...]
> > I was thinking about something like flush_per_cpu_work() which would
> > assert on cgroup_threadgroup_rwsem held for write.
>
> I have thought about this some more and I guess this is not limited to
> per-cpu workers.  Basically any flush_work() with
> cgroup_threadgroup_rwsem held for write is dangerous, right?

Whether per-cpu or not doesn't matter.  What matters is whether the
workqueue has WQ_MEM_RECLAIM or not.

That said, I think what we want to do is to avoid performing heavy
operations in the migration path.  It's where the core and all
controllers have to synchronize, so performing operations with many
external dependencies there is bound to get messy.

I wonder whether memory charge moving can be restructured in a similar
fashion to how cpuset node migration was made async.  However, given
that charge moving has always been a best-effort thing, for now I think
it'd be best to drop lru_add_drain.

Thanks.

--
tejun
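
[Editorial note: the following is a minimal, illustrative sketch of the
async pattern referred to above, modeled on how cpuset made node
migration asynchronous: the heavy work is queued on a dedicated
WQ_MEM_RECLAIM workqueue while cgroup_threadgroup_rwsem is held for
write, and only flushed after the rwsem has been dropped.  It is not
existing kernel code; all memcg_move_* names and memcg_post_attach()
are hypothetical.]

/*
 * Sketch only: memcg_move_wq, memcg_move_work, memcg_attach() and
 * memcg_post_attach() are hypothetical names, not kernel symbols.
 */
#include <linux/workqueue.h>
#include <linux/init.h>

static struct workqueue_struct *memcg_move_wq;	/* hypothetical */
static struct work_struct memcg_move_work;	/* hypothetical */

static void memcg_move_workfn(struct work_struct *work)
{
	/*
	 * The heavy, sleep-prone part of charge moving (e.g. draining
	 * per-cpu LRU batches) would run here, outside of
	 * cgroup_threadgroup_rwsem.
	 */
}

static int __init memcg_move_wq_init(void)
{
	/*
	 * WQ_MEM_RECLAIM guarantees a rescuer thread, so flushing this
	 * workqueue never depends on forking a new worker and thus
	 * never depends on cgroup_threadgroup_rwsem.
	 */
	memcg_move_wq = alloc_workqueue("memcg_move", WQ_MEM_RECLAIM, 0);
	if (!memcg_move_wq)
		return -ENOMEM;
	INIT_WORK(&memcg_move_work, memcg_move_workfn);
	return 0;
}
core_initcall(memcg_move_wq_init);

/* called from ->attach, with cgroup_threadgroup_rwsem held for write */
static void memcg_attach(void)
{
	queue_work(memcg_move_wq, &memcg_move_work);
}

/*
 * Hypothetical hook run by the cgroup core after the rwsem has been
 * released, mirroring what cpuset_post_attach_flush() does for node
 * migration.
 */
static void memcg_post_attach(void)
{
	flush_work(&memcg_move_work);
}

[The key property of this arrangement is that nothing inside the
write-locked section ever waits on a work item whose forward progress
could require forking a new worker thread.]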