From: David Hildenbrand
To: Linus Torvalds, Andreas Gruenbacher
Cc: Alexander Viro, linux-s390, Linux-MM, linux-fsdevel, linux-btrfs,
    Christian Borntraeger, Jason Gunthorpe, John Hubbard,
    "akpm@linux-foundation.org", Heiko Carstens
Subject: Re: Buffered I/O broken on s390x with page faults disabled (gfs2)
Date: Tue, 8 Mar 2022 18:23:04 +0100
Organization: Red Hat
References: <2266e1a8-ac79-94a1-b6e2-47475e5986c5@redhat.com>
 <81f2f76d-24ef-c23b-449e-0b8fdec506e1@redhat.com>
 <1bdb0184-696c-0f1a-3054-d88391c32e64@redhat.com>
In-Reply-To: <1bdb0184-696c-0f1a-3054-d88391c32e64@redhat.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

On 08.03.22 13:24, David Hildenbrand wrote:
> On 08.03.22 13:11, David Hildenbrand wrote:
>> On 08.03.22 09:37, David Hildenbrand wrote:
>>> On 08.03.22 09:21, David Hildenbrand wrote:
>>>> On 08.03.22 00:18, Linus Torvalds wrote:
>>>>> On Mon, Mar 7, 2022 at 2:52 PM Andreas Gruenbacher wrote:
>>>>>>
>>>>>> After generic_file_read_iter() returns a short or empty read, we fault
>>>>>> in some pages with fault_in_iov_iter_writeable(). This succeeds, but
>>>>>> the next call to generic_file_read_iter() returns -EFAULT and we're
>>>>>> not making any progress.
>>>>>
>>>>> Since this is s390-specific, I get the very strong feeling that the
>>>>>
>>>>>   fault_in_iov_iter_writeable ->
>>>>>     fault_in_safe_writeable ->
>>>>>       __get_user_pages_locked ->
>>>>>         __get_user_pages
>>>>>
>>>>> path somehow successfully finds the page, despite it not being
>>>>> properly accessible in the page tables.
>>>>
>>>> As raised offline already, I suspect
>>>>
>>>> shrink_active_list()
>>>> ->page_referenced()
>>>> ->page_referenced_one()
>>>> ->ptep_clear_flush_young_notify()
>>>> ->ptep_clear_flush_young()
>>>>
>>>> which results on s390x in:
>>>>
>>>> static inline pte_t pte_mkold(pte_t pte)
>>>> {
>>>> 	pte_val(pte) &= ~_PAGE_YOUNG;
>>>> 	pte_val(pte) |= _PAGE_INVALID;
>>>> 	return pte;
>>>> }
>>>>
>>>> static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>> 					    unsigned long addr, pte_t *ptep)
>>>> {
>>>> 	pte_t pte = *ptep;
>>>>
>>>> 	pte = ptep_xchg_direct(vma->vm_mm, addr, ptep, pte_mkold(pte));
>>>> 	return pte_young(pte);
>>>> }
>>>>
>>>> _PAGE_INVALID is the actual HW bit, _PAGE_PRESENT is a
>>>> pure SW bit. AFAIU, pte_present() still holds:
>>>>
>>>> static inline int pte_present(pte_t pte)
>>>> {
>>>> 	/* Bit pattern: (pte & 0x001) == 0x001 */
>>>> 	return (pte_val(pte) & _PAGE_PRESENT) != 0;
>>>> }
>>>>
>>>> pte_mkyoung() will revert that action:
>>>>
>>>> static inline pte_t pte_mkyoung(pte_t pte)
>>>> {
>>>> 	pte_val(pte) |= _PAGE_YOUNG;
>>>> 	if (pte_val(pte) & _PAGE_READ)
>>>> 		pte_val(pte) &= ~_PAGE_INVALID;
>>>> 	return pte;
>>>> }
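[ As a self-contained illustration of the interplay above: a toy
user-space model of these helpers, showing how a pte can stay
pte_present() for Linux while the HW invalid bit keeps the MMU
faulting. The bit values are made up for the example; the real
definitions live in arch/s390/include/asm/pgtable.h. ]

#include <stdint.h>
#include <stdio.h>

#define _PAGE_PRESENT 0x001u	/* SW bit: Linux considers the pte present */
#define _PAGE_YOUNG   0x004u	/* SW bit: referenced */
#define _PAGE_READ    0x010u	/* SW bit: readable */
#define _PAGE_INVALID 0x400u	/* HW bit: MMU accesses fault while set */

typedef uint64_t pte_t;

static pte_t pte_mkold(pte_t pte)
{
	pte &= ~(pte_t)_PAGE_YOUNG;
	pte |= _PAGE_INVALID;	/* catch the next reference via a fault */
	return pte;
}

static pte_t pte_mkyoung(pte_t pte)
{
	pte |= _PAGE_YOUNG;
	if (pte & _PAGE_READ)
		pte &= ~(pte_t)_PAGE_INVALID;
	return pte;
}

static int pte_present(pte_t pte)
{
	return (pte & _PAGE_PRESENT) != 0;
}

int main(void)
{
	pte_t pte = _PAGE_PRESENT | _PAGE_READ | _PAGE_YOUNG;

	pte = pte_mkold(pte);	/* LRU aging via page_referenced() */
	printf("after mkold:   present=%d hw_invalid=%d\n",
	       pte_present(pte), !!(pte & _PAGE_INVALID));

	pte = pte_mkyoung(pte);	/* what GUP currently never does */
	printf("after mkyoung: present=%d hw_invalid=%d\n",
	       pte_present(pte), !!(pte & _PAGE_INVALID));
	return 0;
}

[ It prints present=1 both times, but hw_invalid only drops back to 0
after pte_mkyoung(). ]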
>>>>
>>>> and pte_modify() will adjust it properly again:
>>>>
>>>> /*
>>>>  * The following pte modification functions only work if
>>>>  * pte_present() is true. Undefined behaviour if not..
>>>>  */
>>>> static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>>>> {
>>>> 	pte_val(pte) &= _PAGE_CHG_MASK;
>>>> 	pte_val(pte) |= pgprot_val(newprot);
>>>> 	/*
>>>> 	 * newprot for PAGE_NONE, PAGE_RO, PAGE_RX, PAGE_RW and PAGE_RWX
>>>> 	 * has the invalid bit set, clear it again for readable, young pages
>>>> 	 */
>>>> 	if ((pte_val(pte) & _PAGE_YOUNG) && (pte_val(pte) & _PAGE_READ))
>>>> 		pte_val(pte) &= ~_PAGE_INVALID;
>>>> 	/*
>>>> 	 * newprot for PAGE_RO, PAGE_RX, PAGE_RW and PAGE_RWX has the page
>>>> 	 * protection bit set, clear it again for writable, dirty pages
>>>> 	 */
>>>> 	if ((pte_val(pte) & _PAGE_DIRTY) && (pte_val(pte) & _PAGE_WRITE))
>>>> 		pte_val(pte) &= ~_PAGE_PROTECT;
>>>> 	return pte;
>>>> }
>>>>
>>>> Which leaves me wondering if there is a way in GUP whereby we would
>>>> look up that page and not clear _PAGE_INVALID, resulting in GUP
>>>> succeeding but faults via the MMU still faulting on _PAGE_INVALID.
>>>
>>> follow_page_pte() has this piece of code:
>>>
>>> 	if (flags & FOLL_TOUCH) {
>>> 		if ((flags & FOLL_WRITE) &&
>>> 		    !pte_dirty(pte) && !PageDirty(page))
>>> 			set_page_dirty(page);
>>> 		/*
>>> 		 * pte_mkyoung() would be more correct here, but atomic care
>>> 		 * is needed to avoid losing the dirty bit: it is easier to use
>>> 		 * mark_page_accessed().
>>> 		 */
>>> 		mark_page_accessed(page);
>>> 	}
>>>
>>> Which at least to me suggests that, although the page is marked
>>> accessed and GUP succeeds, the PTE might still have _PAGE_INVALID set
>>> after GUP succeeded.
>>>
>>> On s390x, there is no HW dirty bit, so we might just be able to do a
>>> proper pte_mkyoung() here instead of the mark_page_accessed().
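[ To spell out the suspected livelock: a toy user-space model, with
hypothetical helpers standing in for the real paths. fault_in_via_gup()
plays __get_user_pages() with FOLL_TOUCH, copy_via_mmu() plays the user
copy inside generic_file_read_iter(). ]

#include <stdbool.h>
#include <stdio.h>

/* Set by pte_mkold() during LRU aging; only a pte update clears it. */
static bool pte_hw_invalid = true;

/*
 * GUP finds the page and calls mark_page_accessed(), which touches the
 * struct page but not the pte, so the HW invalid bit survives.
 */
static bool fault_in_via_gup(void)
{
	return true;	/* "success" */
}

/* The user copy goes through the MMU and faults while the bit is set. */
static bool copy_via_mmu(void)
{
	return !pte_hw_invalid;
}

int main(void)
{
	int i;

	for (i = 0; i < 3; i++) {	/* bounded here, unbounded in the kernel */
		if (copy_via_mmu()) {
			puts("read made progress");
			return 0;
		}
		if (!fault_in_via_gup()) {
			puts("-EFAULT to user space");
			return 1;
		}
		/* fault-in "succeeded", so the read path simply retries... */
	}
	puts("livelock: fault-in keeps succeeding, the copy keeps faulting");
	return 0;
}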
>>
>> Something hacky like this should be able to show whether what I suspect
>> is the case. It compiles, but I didn't actually test it.
>
> That would be the alternative that also takes the mkdirty into account:
>
>
> From 1e51e8a93894f87c0a4d0e908391e0628ae56afe Mon Sep 17 00:00:00 2001
> From: David Hildenbrand
> Date: Tue, 8 Mar 2022 12:51:26 +0100
> Subject: [PATCH] mm/gup: fix buffered I/O on s390x with pagefaults disabled
>
> On s390x, we actually need a pte_mkyoung() / pte_mkdirty() instead of
> going via the page and leaving the PTE unmodified. E.g., if we only
> mark the page accessed via mark_page_accessed() when doing a FOLL_TOUCH,
> we'll fail to clear the HW invalid bit in the pte, and subsequent accesses
> via the MMU would still require a pagefault.
>
> Otherwise, buffered I/O will loop forever because it will keep stumbling
> over the set HW invalid bit, requiring a page fault.
>
> Reported-by: Andreas Gruenbacher
> Signed-off-by: David Hildenbrand
> ---
>  mm/gup.c | 32 +++++++++++++++++++++++++-------
>  1 file changed, 25 insertions(+), 7 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index a9d4d724aef7..de3311feb377 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -587,15 +587,33 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>  		}
>  	}
>  	if (flags & FOLL_TOUCH) {
> -		if ((flags & FOLL_WRITE) &&
> -		    !pte_dirty(pte) && !PageDirty(page))
> -			set_page_dirty(page);
>  		/*
> -		 * pte_mkyoung() would be more correct here, but atomic care
> -		 * is needed to avoid losing the dirty bit: it is easier to use
> -		 * mark_page_accessed().
> +		 * We have to be careful with updating the PTE on architectures
> +		 * that have a HW dirty bit: while updating the PTE we might
> +		 * lose that bit again and we'd need an atomic update: it is
> +		 * easier to leave the PTE untouched for these architectures.
> +		 *
> +		 * s390x doesn't have a hw referenced / dirty bit and e.g., sets
> +		 * the hw invalid bit in pte_mkold(), to catch further
> +		 * references. We have to update the PTE here to e.g., clear the
> +		 * invalid bit; otherwise, callers that rely on not requiring
> +		 * an MMU fault once GUP(FOLL_TOUCH) succeeded will loop forever
> +		 * because the page won't actually be accessible via the MMU.
>  		 */
> -		mark_page_accessed(page);
> +		if (IS_ENABLED(CONFIG_S390)) {
> +			pte = pte_mkyoung(pte);
> +			if (flags & FOLL_WRITE)
> +				pte = pte_mkdirty(pte);
> +			if (!pte_same(pte, *ptep)) {
> +				set_pte_at(vma->vm_mm, address, ptep, pte);
> +				update_mmu_cache(vma, address, ptep);

I just reproduced without this fix, and then tried to reproduce with this
fix applied (however, replacing pte_same() + set_pte_at() with a
ptep_set_access_flags()). Seems to do the trick for me.

I'll figure out the best way to handle IS_ENABLED(CONFIG_S390) before
posting an official fix.

-- 
Thanks,

David / dhildenb
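[ For illustration, this is roughly what the hunk above looks like with
pte_same() + set_pte_at() replaced by ptep_set_access_flags(), as
mentioned in the reply; a sketch of the idea, not the officially posted
fix. The generic ptep_set_access_flags() only writes the pte and flushes
stale TLB entries when the new pte actually differs, and returns whether
it did. ]

	if (flags & FOLL_TOUCH) {
		if (IS_ENABLED(CONFIG_S390)) {
			pte_t entry = pte_mkyoung(pte);

			if (flags & FOLL_WRITE)
				entry = pte_mkdirty(entry);
			/*
			 * Updates the pte and flushes stale TLB entries only
			 * if the new pte differs, replacing the open-coded
			 * pte_same() + set_pte_at() pair above.
			 */
			if (ptep_set_access_flags(vma, address, ptep, entry,
						  flags & FOLL_WRITE))
				update_mmu_cache(vma, address, ptep);
		} else {
			mark_page_accessed(page);
		}
	}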