Date: Thu, 28 Apr 2016 10:17:59 +0200
From: Michal Hocko
To: Dave Chinner
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Andrew Morton, Jan Kara, xfs@oss.sgi.com, LKML
Subject: Re: [PATCH 2/2] mm, debug: report when GFP_NO{FS,IO} is used explicitly from memalloc_no{fs,io}_{save,restore} context
Message-ID: <20160428081759.GA31489@dhcp22.suse.cz>
References: <1461671772-1269-1-git-send-email-mhocko@kernel.org> <1461671772-1269-3-git-send-email-mhocko@kernel.org> <20160426225845.GF26977@dastard>
In-Reply-To: <20160426225845.GF26977@dastard>

[Trim the CC list]

On Wed 27-04-16 08:58:45, Dave Chinner wrote:
[...]
> Often these are to silence lockdep warnings (e.g. commit b17cb36
> ("xfs: fix missing KM_NOFS tags to keep lockdep happy")) because
> lockdep gets very unhappy about the same functions being called with
> different reclaim contexts. e.g. directory block mapping might
> occur from readdir (no transaction context) or within transactions
> (create/unlink). hence paths like this are tagged with GFP_NOFS to
> stop lockdep emitting false positive warnings....

As already said in the other email, I have tried to revert the above
commit and run it with some fs workloads, but I didn't manage to hit
any lockdep splats (after I fixed my bug in the patch 1.2). I have
tried to find the reports which led to this commit but didn't succeed
much.
Everything is from much earlier or later. Do you happen to remember
which workloads triggered them, what they looked like, or have an idea
of what to try in order to reproduce them? So far I have been trying
heavy parallel fs_mark and kernbench inside a tiny virtual machine, so
all of those should have been triggering direct reclaim all the time.

Thanks!
-- 
Michal Hocko
SUSE Labs