Date: Fri, 16 Dec 2022 18:59:15 -0800
From: Boqun Feng
To: Linus Torvalds
Cc: Waiman Long, Al Viro, Damien Le Moal, Wei Chen, linux-ide@vger.kernel.org,
	linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com, syzbot,
	linux-fsdevel, Chuck Lever, Jeff Layton, Peter Zijlstra
Subject: Re: possible deadlock in __ata_sff_interrupt

On Fri, Dec 16, 2022 at 08:31:54PM -0600, Linus Torvalds wrote:
> Ok, let's bring in Waiman for the rwlock side.
>
> On Fri, Dec 16, 2022 at 5:54 PM Boqun Feng wrote:
> >
> > Right, for a reader not in_interrupt(), it may be blocked by a random
> > waiting writer because of the fairness, even if the lock is currently
> > held by a reader:
> >
> > 	CPU 1				CPU 2				CPU 3
> > 	read_lock(&tasklist_lock);	// get the lock
> >
> > 					write_lock_irq(&tasklist_lock);	// wait for the lock
> >
> > 									read_lock(&tasklist_lock);	// cannot get the lock
> > 									// because of the fairness
>
> But this should be ok - because CPU1 can make progress and eventually
> release the lock.
>

Yes.

> So the tasklist_lock use is fine on its own - the reason interrupts
> are special is because an interrupt on CPU 1 taking the lock for
> reading would deadlock otherwise. As long as it happens on another
> CPU, the original CPU should then be able to make progress.
>
> But the problem here seems to be that *another* lock is also involved
> (in this case apparently "host->lock"), and now if CPU1 and CPU2 get
> these two locks in a different order, you can get an ABBA deadlock.
>

Right.

> And apparently our lockdep machinery doesn't catch that issue, so it
> doesn't get flagged.
>

I'm confused.
Isn't the original problem report showing that lockdep catches this?

> I'm not sure what the lockdep rules for rwlocks are, but maybe lockdep
> treats rwlocks as being _always_ unfair, not knowing about that "it's
> only unfair when it's in interrupt context".
>

The rules nowadays are:

*	If the reader is in_interrupt() or the queued-spinlock
	implementation is not used, it's an unfair reader, i.e. it won't
	wait for any existing writer.

*	Otherwise, it's a fair reader.

> Maybe we need to always make rwlock unfair? Possibly only for tasklist_lock?
>

That's possible, but I need to make sure I understand the issue for
lockdep first: is it that lockdep misses catching something, or that it
reports a false positive?

Regards,
Boqun

> Oh, how I hate tasklist_lock. It's pretty much our one remaining "one
> big lock". It's been a pain for a long long time.
>
>                 Linus