Date: Wed, 24 Feb 2021 18:44:11 +0100
From: Jan Kara
To: Matthew Wilcox
Cc: Jan Kara, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Christoph Hellwig, Kent Overstreet
Subject: Re: [RFC] Better page cache error handling
Message-ID: <20210224174411.GH849@quack2.suse.cz>
References: <20210205161142.GI308988@casper.infradead.org>
 <20210224123848.GA27695@quack2.suse.cz>
 <20210224134115.GP2858050@casper.infradead.org>
In-Reply-To: <20210224134115.GP2858050@casper.infradead.org>

On Wed 24-02-21 13:41:15, Matthew Wilcox wrote:
> On Wed, Feb 24, 2021 at 01:38:48PM +0100, Jan Kara wrote:
> > > We allocate a page and try to read it.
> > > 29 threads pile up waiting for the page lock in
> > > filemap_update_page().  The error returned by the original I/O is
> > > shared between all 29 waiters as well as being returned to the
> > > requesting thread.  The next request for index.html will send
> > > another I/O, and more waiters will pile up trying to get the page
> > > lock, but at no time will more than 30 threads be waiting for the
> > > I/O to fail.
> >
> > Interesting idea. It certainly improves the current behavior. I just
> > wonder whether this isn't a partial solution to the problem and
> > whether a full solution wouldn't have to go in a different direction.
> > I mean, it just seems wrong that each reader (let's assume they don't
> > overlap) has to retry the failed IO and wait for the HW to figure out
> > it's not going to work. Shouldn't we cache the error state with the
> > page? I understand that we then also have to deal with the problem of
> > how to invalidate the error state when the block might eventually
> > become readable again (for stuff like temporary IO failures). That
> > would need some signalling from the driver to the page cache, maybe
> > in the form of an error recovery sequence counter or something like
> > that. For stuff like iSCSI, multipath, or NBD it could be doable, I
> > believe...
>
> That felt like a larger change than I wanted to make.  I already have
> a few big projects on my plate!

I can understand that ;)

> Also, it's not clear to me that the host can necessarily figure out
> when a device has fixed an error -- certainly for the three cases you
> list it can be done.  I think we'd want a timer to indicate that it's
> worth retrying instead of returning the error.
>
> Anyway, that seems like a lot of data to cram into a struct page.  So
> I think my proposal is still worth pursuing while waiting for someone
> to come up with a perfect solution.

Yes, a timer could be a fallback. Or we could just schedule work to
discard all 'error' pages in the fs in an hour or so. Not perfect, but
more or less workable I'd say. Also, I don't think we need to cram this
directly into struct page - I think it is perfectly fine to kmalloc()
the structure we need for caching when we hit an error, and simply not
cache anything if the allocation fails. Then we would just reference it
from the appropriate place... I haven't put too much thought into this
yet...
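Just to illustrate the direction I mean - a rough sketch only. All the
names and the whole 'recovery sequence counter' interface below are
made up for illustration, nothing like this exists in the tree:

#include <linux/jiffies.h>
#include <linux/slab.h>

/*
 * Hypothetical sketch - pgcache_error and the recovery_seq interface
 * are invented here, they are not existing kernel API.
 */
struct pgcache_error {
	int error;			/* errno from the failed read */
	unsigned long stamp;		/* jiffies when the read failed */
	unsigned int recovery_seq;	/* snapshot of driver recovery counter */
};

/*
 * Read completion path, on failure: cache the error if we can.
 * Completion may run in atomic context, hence GFP_NOWAIT; if the
 * allocation fails we simply don't cache and lose nothing.
 */
static struct pgcache_error *pgcache_error_record(int error,
						  unsigned int recovery_seq)
{
	struct pgcache_error *pe;

	pe = kmalloc(sizeof(*pe), GFP_NOWAIT);
	if (!pe)
		return NULL;
	pe->error = error;
	pe->stamp = jiffies;
	pe->recovery_seq = recovery_seq;
	return pe;
}

/*
 * A reader that finds a cached error calls this instead of re-issuing
 * the I/O.  Returns the errno to fail fast with, or 0 when the record
 * is stale and the read should be retried against the device.
 */
static int pgcache_error_check(const struct pgcache_error *pe,
			       unsigned int cur_recovery_seq)
{
	/* The driver signalled error recovery since the failure - retry. */
	if (pe->recovery_seq != cur_recovery_seq)
		return 0;
	/* Fallback when the driver cannot signal: retry after an hour. */
	if (time_after(jiffies, pe->stamp + 60 * 60 * HZ))
		return 0;
	return pe->error;
}

Where exactly we'd reference the record from is the part I haven't
thought through.

								Honza
-- 
Jan Kara
SUSE Labs, CR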