Date: Sat, 7 May 2022 20:46:46 +0300
From: Jarkko Sakkinen
To: Dave Hansen
Cc: Reinette Chatre, dave.hansen@linux.intel.com,
    linux-sgx@vger.kernel.org, haitao.huang@intel.com
Subject: Re: [RFC PATCH 0/4] SGX shmem backing store issue

On Mon, May 02, 2022 at 02:33:06PM -0700, Dave Hansen wrote:
> On 5/2/22 10:11, Reinette Chatre wrote:
> > My goal was to prevent the page fault handler from doing an "is the
> > PCMD page empty" check while the reclaimer holds a reference to the
> > page. Even if the PCMD page is empty when the page fault handler
> > checks it, the page is expected to get data as soon as the reclaimer
> > can get the mutex back, since the reclaimer already holds a
> > reference to the page.
>
> I think shmem_truncate_range() might be the wrong operation. It
> destroys data and, in the end, we don't want to destroy data.
> Filesystems and the page cache already have nice ways to keep from
> destroying data; we just need to use them.
>
> First, I think set_page_dirty(pcmd_page) before the EWB is a good
> start. That's what filesystems do before important data that needs
> to be saved goes into pages.
>
> Second, I think we need behavior like POSIX_FADV_DONTNEED, not
> FALLOC_FL_PUNCH_HOLE. The DONTNEED operations end up in
> mapping_evict_folio(), which has both page refcount *and* dirty-page
> checks. That means that either an elevated refcount or
> set_page_dirty() will thwart the DONTNEED-like behavior.
>
> There are two basic things we need to do:
>
>  1. Prevent the page from being truncated around EWB time
>  2. Prevent an unreferenced page of all zeros from staying in shmem
>     forever or taking up swap space
>
> On the EWB (write to PCMD) side, I think something like this works:
>
>         sgx_encl_get_backing()
>         get_page(pcmd_page)
>
>         ...
>
>         lock_page(pcmd_page);
>         // check for truncation since sgx_encl_get_backing()
>         if (pcmd_page->mapping != shmem)
>                 goto retry;
>         // double check this is OK under lock_page():
>         set_page_dirty(pcmd_page);
>         __sgx_encl_ewb();
>         unlock_page(pcmd_page);
>
> That's basically what filesystems do: get the page from the page
> cache, lock it, then make sure it is consistent. If not, retry.
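
A hedged rendering of that write-side pattern as kernel C may make the
retry loop clearer. It assumes the driver internals in
arch/x86/kernel/cpu/sgx (struct sgx_encl with a shmem-backed "backing"
file, struct sgx_backing, sgx_encl_get_backing(),
sgx_encl_put_backing() and __sgx_encl_ewb(), whose signatures are
assumed here); it is a sketch of the locking scheme, not a patch from
this thread:

        #include <linux/mm.h>
        #include <linux/pagemap.h>
        #include "encl.h"        /* driver-internal types; assumed */

        static int sgx_encl_ewb_locked(struct sgx_encl *encl,
                                       struct sgx_epc_page *epc_page,
                                       void *va_slot,
                                       unsigned long page_index)
        {
                struct sgx_backing backing;
                struct page *pcmd_page;
                int ret;

        retry:
                ret = sgx_encl_get_backing(encl, page_index, &backing);
                if (ret)
                        return ret;

                pcmd_page = backing.pcmd;
                get_page(pcmd_page);    /* pin the page across the EWB */

                lock_page(pcmd_page);

                /* Re-check for truncation since sgx_encl_get_backing(). */
                if (pcmd_page->mapping != encl->backing->f_mapping) {
                        unlock_page(pcmd_page);
                        put_page(pcmd_page);
                        sgx_encl_put_backing(&backing, false);
                        goto retry;
                }

                /* Dirty pages survive DONTNEED-style eviction. */
                set_page_dirty(pcmd_page);

                ret = __sgx_encl_ewb(epc_page, va_slot, &backing);

                unlock_page(pcmd_page);
                put_page(pcmd_page);
                return ret;
        }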
> On the "free" / read-in (ELDU) side:
>
>         // get pcmd_page ref
>         lock_page(pcmd_page);
>         // truncation is not a concern because that's only done
>         // on the read-in side, here, where we hold encl->lock
>
>         memset();
>         if (!memchr_inv())
>                 // clear the way for DONTNEED:
>                 ClearPageDirty(pcmd_page);
>         unlock_page(pcmd_page);
>         // drop pcmd_page ref
>         ...
>         POSIX_FADV_DONTNEED
>
> There's one downside to this: it's _possible_ that a transient
> get_page() would block POSIX_FADV_DONTNEED. Then the zeroed page
> would stick around forever, or at least until the next ELDU
> operation did another memchr_inv().
>
> I went looking around for some of those and could not find any that
> I *know* apply to shmem.
>
> This doesn't feel like a great solution; it's more complicated than
> I would like. Any other ideas?

If we could do both truncation and swapping on one side, i.e. in
ksgxd, that would simplify this a lot, and the whole synchronization
problem would not exist. E.g. perhaps the #PF handler could just zero
the PCMD entry and collect the indices of the zeroed pages into a
list, and ksgxd would truncate them.

BR, Jarkko
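
Filling in the quoted ELDU-side steps as a hedged C sketch: it again
assumes the SGX driver internals, uses vfs_fadvise() as the in-kernel
equivalent of POSIX_FADV_DONTNEED, and takes struct sgx_pcmd as the
128-byte PCMD entry; sgx_encl_consume_pcmd() and the field names are
made up for illustration:

        #include <linux/fs.h>
        #include <linux/fadvise.h>
        #include <linux/highmem.h>
        #include <linux/pagemap.h>
        #include <linux/string.h>   /* memchr_inv() */
        #include <asm/sgx.h>        /* struct sgx_pcmd */
        #include "encl.h"           /* driver-internal types; assumed */

        static void sgx_encl_consume_pcmd(struct sgx_encl *encl,
                                          struct sgx_backing *backing)
        {
                struct page *pcmd_page = backing->pcmd;
                loff_t pos = (loff_t)pcmd_page->index << PAGE_SHIFT;
                void *kaddr;
                bool empty;

                lock_page(pcmd_page);

                kaddr = kmap_local_page(pcmd_page);
                /* Consume the entry; the restored EPC page owns the data. */
                memset(kaddr + backing->pcmd_offset, 0,
                       sizeof(struct sgx_pcmd));
                empty = !memchr_inv(kaddr, 0, PAGE_SIZE);
                kunmap_local(kaddr);

                /* A clean page is fair game for DONTNEED eviction. */
                if (empty)
                        ClearPageDirty(pcmd_page);

                unlock_page(pcmd_page);
                put_page(pcmd_page);    /* drop the lookup reference */

                if (empty)
                        vfs_fadvise(encl->backing, pos, PAGE_SIZE,
                                    POSIX_FADV_DONTNEED);
        }

As the quoted text notes, a transient get_page() elsewhere can still
turn the DONTNEED into a no-op, so a zeroed page may linger until the
next ELDU pass repeats the memchr_inv().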
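
And a hedged sketch of the single-sided alternative proposed above:
the #PF handler only records the index of a PCMD page it has zeroed,
and ksgxd, which already owns reclaim, truncates the collected pages
later. All names here are invented for illustration, and a real
version would presumably hang the list off struct sgx_encl rather
than use a global:

        #include <linux/fs.h>
        #include <linux/list.h>
        #include <linux/slab.h>
        #include <linux/spinlock.h>
        #include <linux/shmem_fs.h>
        #include "encl.h"            /* driver-internal types; assumed */

        struct sgx_zeroed_pcmd {
                struct list_head list;
                pgoff_t index;
        };

        static LIST_HEAD(sgx_zeroed_pcmds);
        static DEFINE_SPINLOCK(sgx_zeroed_lock);

        /* #PF handler side: record the index after zeroing the entry. */
        static void sgx_queue_zeroed_pcmd(pgoff_t index)
        {
                struct sgx_zeroed_pcmd *z = kmalloc(sizeof(*z), GFP_KERNEL);

                if (!z)
                        return; /* best effort: the page merely lingers */

                z->index = index;
                spin_lock(&sgx_zeroed_lock);
                list_add_tail(&z->list, &sgx_zeroed_pcmds);
                spin_unlock(&sgx_zeroed_lock);
        }

        /* ksgxd side: the only place that truncates backing pages. */
        static void sgx_truncate_zeroed_pcmds(struct sgx_encl *encl)
        {
                struct sgx_zeroed_pcmd *z, *tmp;
                LIST_HEAD(batch);

                spin_lock(&sgx_zeroed_lock);
                list_splice_init(&sgx_zeroed_pcmds, &batch);
                spin_unlock(&sgx_zeroed_lock);

                list_for_each_entry_safe(z, tmp, &batch, list) {
                        loff_t pos = (loff_t)z->index << PAGE_SHIFT;

                        shmem_truncate_range(file_inode(encl->backing),
                                             pos, pos + PAGE_SIZE - 1);
                        list_del(&z->list);
                        kfree(z);
                }
        }

Since only ksgxd would ever truncate, the #PF handler no longer races
with truncation at all; the cost is that zeroed pages survive until
the next ksgxd pass.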