Date: Tue, 21 Mar 2023 05:37:27 -0700
From: Christoph Hellwig
To: Ulf Hansson
Cc: Christoph Hellwig, Adrian Hunter, linux-mmc@vger.kernel.org, Wenchao Chen, Avri Altman, Christian Lohle, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Bean Huo
Subject: Re: [PATCH] mmc: core: Allow to avoid REQ_FUA if the eMMC supports an internal cache
References: <20230316164514.1615169-1-ulf.hansson@linaro.org> <522a5d01-e939-278a-3354-1bbfb1bd6557@intel.com>

On Mon, Mar 20, 2023 at 04:24:36PM +0100, Ulf Hansson wrote:
> > Neither to ATA nor SCSI, but applications and file systems always very
> > much expected it, so without it storage devices would be considered
> > faulty. Only NVMe actually finally made it part of the standard.
>
> Even if the standard doesn't say, it's perfectly possible that the
> storage device implements it.

That's exactly what I'm saying above.

> > But these are completely separate issues. Torn writes are completely
> > unrelated to cache flushes. You can indeed work around torn writes
> > by checksums, but not the lack of cache flushes, or vice versa.
>
> It's not a separate issue for eMMC. Please read the complete commit
> message for further clarifications in this regard.

The commit message claims that checksums replace cache flushes, which is
dangerously wrong. So please don't refer me to it again - that
dangerously incorrect commit message is what alerted me to reply to the
patch.

> > > However, the issue has been raised that reliable write is not
> > > needed to provide sufficient assurance of data integrity, and that
> > > in fact, cache flush can be used instead and perform better.
> >
> > It does not.
>
> Can you please elaborate on this?

Flushing caches does not replace the invariant of not tearing subsector
writes. And if you need to use reliable writes for (some) devices to
not tear sectors, no amount of cache flushing is going to paper over
the problem.