From: Dan Williams
Date: Tue, 2 Jul 2019 08:37:52 -0700
Subject: Re: [PATCH] filesystem-dax: Disable PMD support
To: Matthew Wilcox
Cc: Seema Pandit, linux-nvdimm, Linux Kernel Mailing List, stable,
 Robert Barror, linux-fsdevel, Jan Kara
In-Reply-To: <20190702033410.GB1729@bombadil.infradead.org>
References: <20190627195948.GB4286@bombadil.infradead.org>
 <20190629160336.GB1180@bombadil.infradead.org>
 <20190630152324.GA15900@bombadil.infradead.org>
 <20190702033410.GB1729@bombadil.infradead.org>

On Mon, Jul 1, 2019 at 8:34 PM Matthew Wilcox wrote:
>
> On Sun, Jun 30, 2019 at 02:37:32PM -0700, Dan Williams wrote:
> > On Sun, Jun 30, 2019 at 8:23 AM Matthew Wilcox wrote:
> > > I think my theory was slightly mistaken, but your fix has the effect of
> > > fixing the actual problem too.
> > >
> > > The xas->xa_index for a PMD is going to be PMD-aligned (ie a multiple of
> > > 512), but xas_find_conflict() does _not_ adjust xa_index (... which I
> > > really should have mentioned in the documentation). So we go to sleep
> > > on the PMD-aligned index instead of the index of the PTE. Your patch
> > > fixes this by using the PMD-aligned index for PTEs too.
> > >
> > > I'm trying to come up with a clean fix for this. Clearly we
> > > shouldn't wait for a PTE entry if we're looking for a PMD entry.
> > > But what should get_unlocked_entry() return if it detects that case?
> > > We could have it return an error code encoded as an internal entry,
> > > like grab_mapping_entry() does. Or we could have it return the _locked_
> > > PTE entry, and have callers interpret that.
> > >
> > > At least get_unlocked_entry() is static, but it's got quite a few callers.
> > > Trying to discern which ones might ask for a PMD entry is a bit tricky.
> > > So this seems like a large patch which might have bugs.
> > >
> > > Thoughts?
> >
> > ...but if it was a problem of just mismatched waitqueues I would have
> > expected it to trigger prior to commit b15cd800682f "dax: Convert page
> > fault handlers to XArray".
>
> That commit converts grab_mapping_entry() (called by dax_iomap_pmd_fault())
> from calling get_unlocked_mapping_entry() to calling get_unlocked_entry().
> get_unlocked_mapping_entry() (eventually) called __radix_tree_lookup()
> instead of xas_find_conflict().
>
> > This hunk, if I'm reading it correctly,
> > looks suspicious: @index in this case is coming directly from
> > vmf->pgoff without pmd alignment adjustment, whereas after the
> > conversion it's always pmd-aligned from the xas->xa_index. So perhaps
> > the issue is that the lock happens at pte granularity. I'd expect it
> > would cause the old put_locked_mapping_entry() to WARN, but maybe that
> > avoids the lockup and was missed in the bisect.
>
> I don't think that hunk is the problem. The __radix_tree_lookup()
> is going to return a 'slot' which points to the canonical slot, no
> matter which of the 512 indices corresponding to that slot is chosen.
> So I think it's going to do essentially the same thing.

Yeah, no warnings on the parent commit for the regression.

I'd be inclined to do the brute-force fix of not trying to get fancy
with separate PTE/PMD waitqueues, and then follow on with a more clever
performance enhancement later. Thoughts about that?
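
To make the missed wakeup concrete, here's a standalone toy model of the
index mismatch described above. This is illustrative userspace C, not
kernel code: wait_bucket() is a made-up, simplified stand-in for the
hashed wait-table lookup in fs/dax.c, and the constants are picked for
the example.

#include <stdio.h>

#define DAX_WAIT_TABLE_BITS 12   /* toy value; models a small hash table  */
#define PMD_ENTRIES 512          /* PTEs per PMD on x86-64 (2MB / 4KB)    */

/* Simplified stand-in for the kernel's hashed DAX wait-table lookup. */
static unsigned long wait_bucket(void *mapping, unsigned long index)
{
	return ((unsigned long)mapping ^ index) &
	       ((1UL << DAX_WAIT_TABLE_BITS) - 1);
}

int main(void)
{
	void *mapping = (void *)0x1000;   /* pretend address_space pointer */
	unsigned long pte_index = 517;    /* faulting page offset          */
	unsigned long pmd_index = pte_index & ~(unsigned long)(PMD_ENTRIES - 1);

	/*
	 * The PMD fault path sleeps on the bucket derived from the
	 * PMD-aligned xa_index (512), but the holder of the conflicting
	 * PTE entry issues its wakeup on the bucket for the PTE's own
	 * index (517) -- different buckets, so the waiter never wakes.
	 */
	printf("waiter bucket (PMD-aligned index %lu): %lu\n",
	       pmd_index, wait_bucket(mapping, pmd_index));
	printf("waker bucket  (PTE index %lu):         %lu\n",
	       pte_index, wait_bucket(mapping, pte_index));
	return 0;
}

Both sides have to hash to the same bucket for the wakeup to be seen;
with the waiter keying on 512 and the waker on 517, it never is.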
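
Matthew's internal-entry suggestion might look roughly like the
following, sketched against the 5.2-era get_unlocked_entry() and
untested. The order parameter is an assumed addition; dax_entry_order(),
xa_mk_internal(), and VM_FAULT_FALLBACK are existing facilities, and the
internal-entry trick is the one grab_mapping_entry() already uses for
its fallback path.

/*
 * Hypothetical sketch, not a tested patch: teach get_unlocked_entry()
 * the order of entry the caller wants, and bail out with an internal
 * entry instead of sleeping on a waitqueue that a PTE unlock will
 * never wake.
 */
static void *get_unlocked_entry(struct xa_state *xas, unsigned int order)
{
	void *entry;
	struct wait_exceptional_entry_queue ewait;
	wait_queue_head_t *wq;

	init_wait(&ewait.wait);
	ewait.wait.func = wake_exceptional_entry_func;

	for (;;) {
		entry = xas_find_conflict(xas);
		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
			return entry;
		/* Conflict of the wrong size: punt back to the caller. */
		if (dax_entry_order(entry) < order)
			return xa_mk_internal(VM_FAULT_FALLBACK);
		if (!dax_is_locked(entry))
			return entry;

		wq = dax_entry_waitqueue(xas, entry, &ewait.key);
		prepare_to_wait_exclusive(wq, &ewait.wait,
					  TASK_UNINTERRUPTIBLE);
		xas_unlock_irq(xas);
		xas_reset(xas);
		schedule();
		finish_wait(wq, &ewait.wait);
		xas_lock_irq(xas);
	}
}

The cost, as noted up-thread, is auditing every caller to decide what
order to pass and how to react to the internal entry.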
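
The brute-force option would amount to keying every wait on the
PMD-aligned index unconditionally, e.g. by dropping the
dax_is_pmd_entry() check in dax_entry_waitqueue(). Again an untested
sketch against the 5.2-era fs/dax.c, not a real patch; only the
unconditional alignment is new, the rest follows the existing function.

/*
 * Untested sketch: always align the wait index to the PMD so that PTE
 * and PMD waiters/wakers for the same 2MB region land on the same
 * waitqueue bucket.
 */
static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
		void *entry, struct exceptional_entry_key *key)
{
	unsigned long hash;
	/* was: aligned only if dax_is_pmd_entry(entry) */
	unsigned long index = xas->xa_index & ~PG_PMD_COLOUR;

	key->xa = xas->xa;
	key->entry_start = index;

	hash = hash_long((unsigned long)xas->xa ^ index,
			 DAX_WAIT_TABLE_BITS);
	return wait_table + hash;
}

The tradeoff is spurious wakeups: every PTE in a 2MB range now shares
one bucket, which is exactly the performance refinement being deferred
to a follow-on.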