References: <20210803191818.993968-1-agruenba@redhat.com>
From: Andreas Gruenbacher
Date: Thu, 19 Aug 2021 23:39:56 +0200
Subject: Re: [PATCH v5 00/12] gfs2: Fix mmap + page fault deadlocks
To: Linus Torvalds
Cc: Alexander Viro, Christoph Hellwig, "Darrick J.
 Wong", Paul Mackerras, Jan Kara, Matthew Wilcox, cluster-devel,
 linux-fsdevel, Linux Kernel Mailing List, ocfs2-devel@oss.oracle.com,
 kvm-ppc@vger.kernel.org

On Thu, Aug 19, 2021 at 10:14 PM Linus Torvalds wrote:
> On Thu, Aug 19, 2021 at 12:41 PM Andreas Gruenbacher
> wrote:
> >
> > Hmm, what if GUP is made to skip VM_IO vmas without adding anything to
> > the pages array? That would match fault_in_iov_iter_writeable, which
> > is modeled after __mm_populate and which skips VM_IO and VM_PFNMAP
> > vmas.
>
> I don't understand what you mean. GUP already skips VM_IO (and
> VM_PFNMAP) pages. It just returns EFAULT.
>
> We could make it return another error. We already have DAX and
> FOLL_LONGTERM returning -EOPNOTSUPP.
>
> Of course, I think some code ends up always just returning "number of
> pages looked up" and might return 0 for "no pages" rather than the
> error for the first page.
>
> So we may end up having interfaces that then lose that explanatory
> error code, but I didn't check.
>
> But we couldn't make it just say "skip them and try later addresses",
> if that is what you meant. THAT makes no sense - that would just make
> GUP look up some other address than what was asked for.

get_user_pages() takes start and nr_pages arguments, which together
specify the address range from start to start + nr_pages * PAGE_SIZE.
If pages != NULL, it adds a pointer to that array for each PAGE_SIZE
subpage. I was thinking of skipping over VM_IO vmas in that process:
when the range starts in a mappable vma, runs into a VM_IO vma, and
ends in another mappable vma, the pages in the pages array would be
discontiguous; they would only cover the mappable vmas. But that would
make it difficult to make sense of what's in the pages array, so
scratch that idea. (The first sketch at the end of this mail shows the
invariant that skipping would break.)

> > > I also do still think that even regardless of that, we want to just
> > > add a FOLL_NOFAULT flag that just disables calling handle_mm_fault(),
> > > and then you can use the regular get_user_pages().
> > >
> > > That at least gives us the full _normal_ page handling stuff.
> >
> > And it does fix the generic/208 failure.
>
> Good. So I think the approach is usable, even if we might have corner
> cases left.
>
> So I think the remaining issue is exactly things like VM_IO and
> VM_PFNMAP. Do the fstests have test cases for things like this? It
> _is_ quite specialized, so it might be a good idea to have that.
>
> Of course, doing direct I/O from special memory regions with zerocopy
> might be something special people actually want to do. But I think
> we've had that VM_IO flag testing there basically forever, so I don't
> think it has ever worked (for some definition of "ever").

The v6 patch queue should handle those cases acceptably well for now
(the second sketch below shows the shape of the retry loop involved),
but I don't think we have tests covering them at all.

Thanks,
Andreas
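
P.S.: To make the pages-array invariant concrete, here is a rough
sketch of today's get_user_pages() contract (illustration only, error
handling trimmed; NR_PAGES and start are made up for the example):

	#define NR_PAGES 16		/* arbitrary request size */

	struct page *pages[NR_PAGES];
	unsigned long start;		/* some page-aligned user address */
	long pinned;

	/*
	 * Look up [start, start + NR_PAGES * PAGE_SIZE), strictly front
	 * to back, so pages[i] always corresponds to the address
	 * start + i * PAGE_SIZE.  When the walk hits something it
	 * cannot handle (a VM_IO vma, say), it stops and returns the
	 * short count, or -EFAULT if that happens on the first page.
	 */
	pinned = get_user_pages(start, NR_PAGES, FOLL_WRITE, pages, NULL);
	if (pinned < 0)
		return pinned;

	/*
	 * 0 <= pinned <= NR_PAGES.  "Skipping" a VM_IO vma in the
	 * middle and carrying on would leave holes in pages[], which
	 * is exactly the ambiguity described above.
	 */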
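
And, roughly, the retry pattern the FOLL_NOFAULT approach enables for
the gfs2 direct-read path (heavily simplified, locking and partial-I/O
handling elided; treat this as a sketch of the idea, not the actual
code in the series):

	ssize_t ret;

retry:
	/* Don't fault pages in from the GUP path (FOLL_NOFAULT). */
	to->nofault = true;
	ret = iomap_dio_rw(iocb, to, &gfs2_iomap_ops, NULL, 0);
	to->nofault = false;

	if (ret == -EFAULT) {
		/*
		 * Fault the buffer in with no filesystem locks held,
		 * then retry.  fault_in_iov_iter_writeable() returns
		 * the number of bytes it could *not* fault in, and it
		 * refuses VM_IO and VM_PFNMAP vmas -- those are the
		 * corner cases that still fail.
		 */
		if (fault_in_iov_iter_writeable(to, iov_iter_count(to)) <
		    iov_iter_count(to))
			goto retry;
	}
	return ret;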