From: Dan Williams
Date: Mon, 1 Mar 2021 12:55:53 -0800
Subject: Re: [Ocfs2-devel] Question about the "EXPERIMENTAL" tag for dax in XFS
To: Dave Chinner
Cc: y-goto@fujitsu.com, jack@suse.cz, fnstml-iaas@cn.fujitsu.com,
 linux-nvdimm@lists.01.org, darrick.wong@oracle.com,
 linux-kernel@vger.kernel.org, ruansy.fnst@fujitsu.com,
 linux-xfs@vger.kernel.org, ocfs2-devel@oss.oracle.com,
 viro@zeniv.linux.org.uk, linux-fsdevel@vger.kernel.org,
 qi.fuli@fujitsu.com, linux-btrfs@vger.kernel.org
In-Reply-To: <20210228223846.GA4662@dread.disaster.area>
References: <20210226002030.653855-1-ruansy.fnst@fujitsu.com>
 <20210226190454.GD7272@magnolia>
 <20210226205126.GX4662@dread.disaster.area>
 <20210226212748.GY4662@dread.disaster.area>
 <20210227223611.GZ4662@dread.disaster.area>
 <20210228223846.GA4662@dread.disaster.area>

On Sun, Feb 28, 2021 at 2:39 PM Dave Chinner wrote:
>
> On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote:
> > On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner wrote:
> > > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> > > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner wrote:
> > > > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > > it points to, check if it points to the PMEM that is being
> > > removed, grab the page it points to, map that to the relevant
> > > struct page, run collect_procs() on that page, then kill the user
> > > processes that map that page.
> > >
> > > So why can't we walk the ptes, check the physical pages that they
> > > map to, and if they map to a pmem page we go poison that page,
> > > and that kills any user process that maps it.
> > >
> > > i.e. I can't see how unexpected pmem device unplug is any
> > > different to an MCE delivering a hwpoison event to a DAX mapped
> > > page.
> >
> > I guess the tradeoff is walking a long list of inodes vs walking a
> > large array of pages.
>
> Not really. You're assuming all a filesystem has to do is invalidate
> everything if a device goes away, and that's not true. Finding if an
> inode has a mapping that spans a specific device in a multi-device
> filesystem can be a lot more complex than that. Just walking inodes
> is easy - determining which inodes need invalidation is the hard
> part.

That inode-to-device level of specificity is not needed for the same
reason that drop_caches does not need to be specific. If the wrong
page is unmapped, a re-fault will bring it back, and the re-fault will
fail for the pages that were successfully removed.
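To make the shape of that argument concrete: the existing blunt walk
lives in fs/drop_caches.c:drop_pagecache_sb(). A minimal sketch of the
analogous "zap all DAX ptes" pass follows; dax_zap_mappings_sb() is a
hypothetical name, not an existing kernel function, the IS_DAX() filter
is an assumption about where such a walk would apply, and the locking
pattern is copied from drop_pagecache_sb():

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Hypothetical sketch: unmap every DAX pte for a filesystem whose
 * backing pmem device is going away.  Modeled on drop_pagecache_sb().
 */
static void dax_zap_mappings_sb(struct super_block *sb, void *unused)
{
	struct inode *inode, *toput_inode = NULL;

	spin_lock(&sb->s_inode_list_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		spin_lock(&inode->i_lock);
		/* Skip inodes being torn down, or that are not DAX. */
		if ((inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) ||
		    !IS_DAX(inode)) {
			spin_unlock(&inode->i_lock);
			continue;
		}
		__iget(inode);
		spin_unlock(&inode->i_lock);
		spin_unlock(&sb->s_inode_list_lock);

		/*
		 * Zap all ptes for this inode.  Anything zapped in
		 * error simply re-faults; faults against removed pmem
		 * fail, which is the safety net Dan describes above.
		 */
		unmap_mapping_range(inode->i_mapping, 0, 0, 1);

		iput(toput_inode);
		toput_inode = inode;
		cond_resched();
		spin_lock(&sb->s_inode_list_lock);
	}
	spin_unlock(&sb->s_inode_list_lock);
	iput(toput_inode);
}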
> That's where ->corrupt_range() comes in - the filesystem is already
> set up to do reverse mapping from physical range to inode(s)
> offsets...

Sure, but what is the need to get to that level of specificity with
the filesystem for something that should rarely happen in the course
of normal operation outside of a mistake?

> > There's likely always more pages than inodes, but perhaps it's
> > more efficient to walk the 'struct page' array than sb->s_inodes?
>
> I really don't see why you seem to be telling us that invalidation
> is an either/or choice. There's more ways to convert physical block
> address -> inode file offset and mapping index than brute force
> inode cache walks....

Yes, but I was trying to map it to an existing mechanism, and the
internals of drop_pagecache_sb() are, in coarse terms, close to what
needs to happen here.

> .....
>
> > > IOWs, what needs to happen at this point is very filesystem
> > > specific. Assuming that "device unplug == filesystem dead" is
> > > not correct, nor is specifying a generic action that assumes the
> > > filesystem is dead because a device it is using went away.
> >
> > Ok, I think I set this discussion in the wrong direction implying
> > any mapping of this action to a "filesystem dead" event. It's just
> > a "zap all ptes" event and upper layers recover from there.
>
> Yes, that's exactly what ->corrupt_range() is intended for. It
> allows the filesystem to lock out access to the bad range and then
> recover the data. Or metadata, if that's where the bad range lands.
> If that recovery fails, it can then report a data loss/filesystem
> shutdown event to userspace and kill user procs that span the bad
> range...
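For readers outside the thread: ->corrupt_range() is the notification
hook proposed earlier in this patch series, and its final name and
signature were still under discussion at this point. The skeleton
below is therefore only an assumed shape, tracing the flow Dave
describes; both the name and the argument types are illustrative:

/*
 * Assumed shape of the proposed hook; the actual patch series may use
 * a different name and signature.  Steps mirror Dave's description.
 */
static int xfs_fs_corrupt_range(struct super_block *sb, u64 daddr,
				u64 len)
{
	/* 1. Lock out access to the affected range of the device. */

	/*
	 * 2. Reverse-map [daddr, daddr + len) through the XFS rmap
	 *    btree to find the owners: file data extents and/or
	 *    filesystem metadata.
	 */

	/*
	 * 3. File data: unmap user ptes over the matching file
	 *    offsets and attempt to recover the data.
	 */

	/*
	 * 4. Metadata: attempt recovery; if that fails, report a data
	 *    loss / filesystem shutdown event to userspace and kill
	 *    user processes that span the bad range.
	 */

	return 0;
}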
> FWIW, is this notification going to occur before or after the
> device has been physically unplugged?

Before. These will be operations that happen in the pmem driver
->remove() callback.

> i.e. what do we do about the time-of-unplug-to-time-of-invalidation
> window where userspace can still attempt to access the missing pmem
> through the not-yet-invalidated ptes? It may not be likely that
> people just yank pmem nvdimms out of machines, but with NVMe
> persistent memory spaces, there's every chance that someone pulls
> the wrong device...

The physical removal aspect is only theoretical today: while the pmem
driver has a ->remove() path, that is purely a software unbind
operation. That said, the vulnerability window today is that a
process acquires a dax mapping, the pmem device hosting that
filesystem goes through an unbind / bind cycle, and then a new
filesystem is created / mounted. That old pte may be able to access
data that is outside its intended protection domain.

Going forward, for buses like CXL, there will be a managed physical
remove operation via PCIe native hotplug. The flow there is that the
PCIe hotplug driver will notify the OS of a pending removal, trigger
->remove() on the pmem driver, and only then notify the technician
(via the slot status LED) that the card is safe to pull.
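In code terms, the ordering described above puts the invalidation
inside the driver's ->remove() path, before physical removal is
permitted. A sketch against the nd_pmem_remove() entry point in
drivers/nvdimm/pmem.c; kill_dax() exists today to invalidate the
dax_device, while the holder-notification call is hypothetical and is
exactly what this thread is debating the shape of:

#include <linux/device.h>
#include <linux/dax.h>
/* struct pmem_device comes from the in-tree drivers/nvdimm/pmem.h */

static int nd_pmem_remove(struct device *dev)
{
	struct pmem_device *pmem = dev_get_drvdata(dev);

	/*
	 * Hypothetical step: notify any filesystem holding this
	 * dax_device that the whole device range is going away, so it
	 * can zap ptes and lock out access while the memory is still
	 * present.  No such call exists yet:
	 *
	 *	dax_holder_notify_removal(pmem->dax_dev, 0, pmem->size);
	 */

	/* Existing API: invalidate the dax_device so new dax ops fail. */
	kill_dax(pmem->dax_dev);

	/* ...followed by the usual block device / namespace teardown. */
	return 0;
}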