From: Yosry Ahmed <yosryahmed@google.com>
Date: Fri, 22 Mar 2024 17:12:45 -0700
Subject: Re: [RFC PATCH] mm: add folio in swapcache if swapin from zswap
To: Johannes Weiner
Cc: Barry Song <21cnbao@gmail.com>, chengming.zhou@linux.dev,
    nphamcs@gmail.com, akpm@linux-foundation.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Zhongkun He
References: <20240322163939.17846-1-chengming.zhou@linux.dev>
    <20240322234826.GA448621@cmpxchg.org>
In-Reply-To: <20240322234826.GA448621@cmpxchg.org>

On Fri, Mar 22, 2024 at 4:48 PM Johannes Weiner wrote:
>
> On Fri, Mar 22, 2024 at 10:33:13PM +0000, Yosry Ahmed wrote:
> > On Sat, Mar 23, 2024 at 10:41:32AM +1300, Barry Song wrote:
> > > On Sat, Mar 23, 2024 at 8:38 AM Yosry Ahmed wrote:
> > > >
> > > > On Fri, Mar 22, 2024 at 9:40 AM chengming.zhou@linux.dev wrote:
> > > > >
> > > > > From: Chengming Zhou
> > > > >
> > > > > There is a report of data corruption caused by double swapin, which is
> > > > > only possible in the skip swapcache path on SWP_SYNCHRONOUS_IO backends.
> > > > >
> > > > > The root cause is that zswap is not like other "normal" swap backends:
> > > > > it won't keep a copy of the data after the first swapin. So if
> > >
> > > I don't quite understand this. So once we load a page from zswap, zswap
> > > will free it even though do_swap_page() might not install it in the PTE?
> > >
> > > Shouldn't zswap free the memory after notify_free, just like zram?
> >
> > It's an optimization that zswap has: exclusive loads. After a page is
> > swapped in, it can stick around in the swapcache for a while. In that
> > case, there would be two copies in memory with zram (compressed and
> > uncompressed). Zswap implements exclusive loads to drop the compressed
> > copy. The folio is marked dirty so that any attempt to reclaim it
> > causes a new write (compression) to zswap. It also enables a lot of
> > cleanups and straightforward entry lifetime tracking in zswap.
> >
> > It is mostly fine; the problem here happens because we skip the
> > swapcache during swapin, so there is a possibility that we load the
> > folio from zswap and then just drop it without stashing it anywhere.
> > > > >
> > > > > the folio from the first swapin can't be installed in the page table
> > > > > successfully, we just free it directly. Then on the second swapin, we
> > > > > find nothing in zswap and read wrong data from the swapfile, which is
> > > > > how this data corruption happens.
> > > > >
> > > > > We can fix it by always adding the folio to the swapcache if we know the
> > > > > pinned swap entry can be found in zswap, so it won't get freed even if
> > > > > it can't be installed successfully on the first swapin.
> > > >
> > > > A concurrent faulting thread could have already checked the swapcache
> > > > before we add the folio to it, right? In that case, the thread will
> > > > go ahead and call swap_read_folio() anyway.
> > > >
> > > > Also, I suspect the zswap lookup might hurt performance. Would it be
> > > > better to add the folio back to zswap upon failure? This should be
> > > > detectable by checking if the folio is dirty, as I mentioned in the bug
> > > > report thread.
> > >
> > > I don't like the idea either, as sync-io is the fast path for zram etc.
> > > Or, can we free the compressed data the way zram does?
> >
> > I don't think we want to stop doing exclusive loads in zswap due to this
> > interaction with zram, which shouldn't be common.
> >
> > I think we can solve this by just writing the folio back to zswap upon
> > failure, as I mentioned.
>
> Instead of storing again, can we avoid invalidating the entry in the
> first place if the load is not "exclusive"?
>
> The reason for exclusive loads is that the ownership is transferred to
> the swapcache, so there is no point in keeping our copy. With an
> optimistic read that doesn't transfer ownership, this doesn't
> apply. And we can easily tell inside zswap_load() if we're dealing
> with a swapcache read or not by testing the folio.
>
> The synchronous read already has to pin the swp_entry_t to be safe,
> using swapcache_prepare(). That blocks __read_swap_cache_async(), which
> means no other (exclusive) loads and no invalidates can occur.
>
> The zswap entry is freed during the regular swap_free() path, which
> the sync fault calls on success. Otherwise we keep it.
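[For readers following the thread: a condensed sketch of the skip-swapcache
fault path being discussed, loosely based on the do_swap_page() logic of
this period. Structure, locking, and error handling are simplified, and the
label is illustrative; this is not the exact kernel source.]

    /*
     * Skip-swapcache swapin for SWP_SYNCHRONOUS_IO devices (zram, or
     * zswap sitting in front of one). Sketch only.
     */
    if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
        __swap_count(entry) == 1) {
            folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
                                    vmf->address, false);

            /*
             * Pin the swp_entry_t. A concurrent fault on the same entry
             * fails swapcache_prepare() and retries the fault, and
             * __read_swap_cache_async() blocks on it as well, so no
             * exclusive load and no invalidate can run under the pin.
             */
            if (swapcache_prepare(entry))
                    goto out_retry;         /* illustrative label */

            swap_read_folio(folio, true, NULL);  /* may be served by zswap */

            /*
             * ...install the PTE. On success, swap_free() releases the
             * swap entry; either way the pin is dropped afterwards with
             * swapcache_clear(si, entry).
             */
    }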
I thought about this, but I was particularly worried about the need to
bring back the refcount that was removed when we switched to only
supporting exclusive loads:
https://lore.kernel.org/lkml/20240201-b4-zswap-invalidate-entry-v2-6-99d4084260a0@bytedance.com/

It seems that we don't need it, because swap_free() will free the entry,
as you mentioned, before anyone else has a chance to load it or
invalidate it. Writeback used to grab a reference as well, but it removes
the entry from the tree anyway, takes full ownership of it, and then
frees it, so that should be okay.

It makes me nervous though, to be honest. For example, not long ago
swap_free() didn't call zswap_invalidate() directly (it used to happen
via swap slots cache draining). Without that, a subsequent load could
race with writeback without refcount protection, right? We would need to
make sure to backport 0827a1fb143f ("mm/zswap: invalidate zswap entry
when swap entry free") with the fix to stable, for instance.

I can't find a problem with your diff, but it just makes me nervous to
have non-exclusive loads without a refcount.

> diff --git a/mm/zswap.c b/mm/zswap.c
> index 535c907345e0..686364a6dd86 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1622,6 +1622,7 @@ bool zswap_load(struct folio *folio)
>  	swp_entry_t swp = folio->swap;
>  	pgoff_t offset = swp_offset(swp);
>  	struct page *page = &folio->page;
> +	bool swapcache = folio_test_swapcache(folio);
>  	struct zswap_tree *tree = swap_zswap_tree(swp);
>  	struct zswap_entry *entry;
>  	u8 *dst;
> @@ -1634,7 +1635,8 @@ bool zswap_load(struct folio *folio)
>  		spin_unlock(&tree->lock);
>  		return false;
>  	}
> -	zswap_rb_erase(&tree->rbroot, entry);
> +	if (swapcache)
> +		zswap_rb_erase(&tree->rbroot, entry);
>  	spin_unlock(&tree->lock);
>
>  	if (entry->length)
> @@ -1649,9 +1651,10 @@ bool zswap_load(struct folio *folio)
>  	if (entry->objcg)
>  		count_objcg_event(entry->objcg, ZSWPIN);
>
> -	zswap_entry_free(entry);
> -
> -	folio_mark_dirty(folio);
> +	if (swapcache) {
> +		zswap_entry_free(entry);
> +		folio_mark_dirty(folio);
> +	}
>
>  	return true;
>  }
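[To connect the diff back to the swap_free() point above: a rough sketch
of the invalidation hook that frees the zswap copy in the successful
skip-swapcache case. It is reconstructed from the identifiers used in
this thread (swap_zswap_tree(), zswap_rb_erase(), zswap_entry_free()) and
is an approximation, not the exact mm-unstable code of the time.]

    /* Reached via swap_free() once the swap entry's last user is gone. */
    void zswap_invalidate(int type, pgoff_t offset)
    {
            struct zswap_tree *tree = swap_zswap_tree(swp_entry(type, offset));
            struct zswap_entry *entry;

            spin_lock(&tree->lock);
            entry = zswap_rb_search(&tree->rbroot, offset);
            if (entry) {
                    zswap_rb_erase(&tree->rbroot, entry);
                    zswap_entry_free(entry);
            }
            spin_unlock(&tree->lock);
    }

With the diff above, a failed skip-swapcache swapin never reaches
swap_free(), the entry was never erased in zswap_load(), and the retried
fault still finds the compressed copy; a successful swapin frees the
entry here instead.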