From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 21 Oct 2021 11:05:50 +0100
From: Catalin Marinas
To: Andreas Gruenbacher
Cc: Linus Torvalds, Al Viro, Christoph Hellwig, "Darrick J. Wong",
	Jan Kara, Matthew Wilcox, cluster-devel, linux-fsdevel,
	Linux Kernel Mailing List, "ocfs2-devel@oss.oracle.com",
	Josef Bacik, Will Deacon
Subject: Re: [RFC][arm64] possible infinite loop in btrfs search_ioctl()
Message-ID:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org

On Thu, Oct 21, 2021 at 02:46:10AM +0200, Andreas Gruenbacher wrote:
> On Tue, Oct 12, 2021 at 1:59 AM Linus Torvalds wrote:
> > On Mon, Oct 11, 2021 at 2:08 PM Catalin Marinas wrote:
> > > +#ifdef CONFIG_ARM64_MTE
> > > +#define FAULT_GRANULE_SIZE (16)
> > > +#define FAULT_GRANULE_MASK (~(FAULT_GRANULE_SIZE-1))
> >
> > [...]
> >
> > > If this looks in the right direction, I'll do some proper patches
> > > tomorrow.
> >
> > Looks fine to me.
> > It's going to be quite expensive and bad for caches, though.
> >
> > That said, fault_in_writable() is _supposed_ to all be for the slow
> > path when things go south and the normal path didn't work out, so I
> > think it's fine.
>
> Let me get back to this; I'm actually not convinced that we need to
> worry about sub-page-size fault granules in fault_in_pages_readable
> or fault_in_pages_writeable.
>
> From a filesystem point of view, we can get into trouble when a
> user-space read or write triggers a page fault while we're holding
> filesystem locks, and that page fault ends up calling back into the
> filesystem. To deal with that, we're performing those user-space
> accesses with page faults disabled.

Yes, this makes sense.

> When a page fault would occur, we get back an error instead, and then
> we try to fault in the offending pages. If a page is resident and we
> still get a fault trying to access it, trying to fault in the same
> page again isn't going to help and we have a true error.

You can't be sure the second fault is a true error. The unlocked
fault_in_*() may race with some LRU scheme that makes the pte
inaccessible, or with write-back that makes it clean/read-only.
copy_to_user() under pagefault_disable() fails again, but that's a
benign fault. The filesystem should then re-attempt the fault-in (gup
would correct the pte), disable page faults and retry the
copy_to_user(), potentially in an infinite loop. If you bail out on
the second or third uaccess attempt following a fault_in_*() call, you
may get some unexpected errors (though very rarely). Maybe the
filesystems avoid this problem somehow, but I couldn't figure it out.

> We're clearly looking at memory at a page granularity; faults at a
> sub-page level don't matter at this level of abstraction (but they do
> show similar error behavior). To avoid getting stuck, when it gets a
> short result or -EFAULT, the filesystem implements the following
> backoff strategy: first, it tries to fault in a number of pages.
> When the read or write still doesn't make progress, it scales back
> and faults in a single page. Finally, when that still doesn't help,
> it gives up. This strategy is needed for actual page faults, but it
> also handles sub-page faults appropriately as long as the user-space
> access functions give sensible results.

As I said above, I think with this approach there's a small chance of
incorrectly reporting an error when the fault is in fact recoverable.
If you change it to an infinite loop instead, you'd run into the
sub-page fault problem.

There are some places with such infinite loops already: futex_wake_op()
and search_ioctl() in the btrfs code. I still have to get my head
around generic_perform_write(), but I think we get away with it there
because it faults the page in with a get_user() rather than gup (and
copy_from_user() is guaranteed to make progress if any bytes can still
be accessed).

-- 
Catalin