From: Yosry Ahmed
Date: Tue, 21 Feb 2023 10:56:02 -0800
Subject: Re: [LSF/MM/BPF TOPIC] Swap Abstraction / Native Zswap
To: Yang Shi
Cc: lsf-pc@lists.linux-foundation.org, Johannes Weiner, Linux-MM,
 Michal Hocko, Shakeel Butt, David Rientjes, Hugh Dickins,
 Seth Jennings, Dan Streetman, Vitaly Wool, Peter Xu, Minchan Kim,
 Andrew Morton

On Tue, Feb 21, 2023 at 10:40 AM Yang Shi wrote:
>
> Hi Yosry,
>
> Thanks for proposing this topic. I was thinking about this before but
> I didn't make too much progress due to some other distractions, and I
> got a couple of follow up questions about your design. Please see the
> inline comments below.

Great to see interested folks, thanks!
>
> On Sat, Feb 18, 2023 at 2:39 PM Yosry Ahmed wrote:
> >
> > Hello everyone,
> >
> > I would like to propose a topic for the upcoming LSF/MM/BPF in May
> > 2023 about swap & zswap (hope I am not too late).
> >
> > ==================== Intro ====================
> > Currently, using zswap is dependent on swapfiles in an unnecessary
> > way. To use zswap, you need a swapfile configured (even if the space
> > will not be used) and zswap is restricted by its size. When pages
> > reside in zswap, the corresponding swap entry in the swapfile cannot
> > be used, and is essentially wasted. We also go through unnecessary
> > code paths when using zswap, such as finding and allocating a swap
> > entry on the swapout path, or readahead in the swapin path. I am
> > proposing a swapping abstraction layer that would allow us to remove
> > zswap's dependency on swapfiles. This can be done by introducing a
> > data structure between the actual swapping implementation (swapfiles,
> > zswap) and the rest of the MM code.
> >
> > ==================== Objective ====================
> > Enabling the use of zswap without a backing swapfile, which makes
> > zswap useful for a wider variety of use cases. Also, when zswap is
> > used with a swapfile, the pages in zswap do not use up space in the
> > swapfile, so the overall swapping capacity increases.
> >
> > ==================== Idea ====================
> > Introduce a data structure, which I currently call a swap_desc, as an
> > abstraction layer between swapping implementation and the rest of MM
> > code. Page tables & page caches would store a swap id (encoded as a
> > swp_entry_t) instead of directly storing the swap entry associated
> > with the swapfile. This swap id maps to a struct swap_desc, which acts
> > as our abstraction layer. All MM code not concerned with swapping
> > details would operate in terms of swap descs. The swap_desc can point
> > to either a normal swap entry (associated with a swapfile) or a zswap
> > entry. It can also include all non-backend specific operations, such
> > as the swapcache (which would be a simple pointer in swap_desc), swap
> > counting, etc. It creates a clear, nice abstraction layer between MM
> > code and the actual swapping implementation.
>
> How will the swap_desc be allocated? Dynamically or preallocated? Is
> it 1:1 mapped to the swap slots on swap devices (whatever it is
> backed, for example, zswap, swap partition, swapfile, etc)?

I imagine swap_desc's would be dynamically allocated when we need to
swap something out. When allocated, a swap_desc would either point to
a zswap_entry (if available), or a swap slot otherwise. In this case,
it would be 1:1 mapped to swapped out pages, not the swap slots on
devices.

I know that it might not be ideal to make allocations on the reclaim
path (although it would be a small-ish slab allocation so we might be
able to get away with it), but otherwise we would have statically
allocated swap_desc's for all swap slots on a swap device, even unused
ones, which I imagine is too expensive. Also for things like zswap, it
doesn't really make sense to preallocate at all. WDYT?

(I put a very rough sketch of what such a swap_desc could look like at
the end of this email.)

> >
> > ==================== Benefits ====================
> > This work enables using zswap without a backing swapfile and increases
> > the swap capacity when zswap is used with a swapfile. It also creates
> > a separation that allows us to skip code paths that don't make sense
> > in the zswap path (e.g. readahead).
> > We get to drop zswap's rbtree,
> > which might result in better performance (fewer lookups, less lock
> > contention).
> >
> > The abstraction layer also opens the door for multiple cleanups (e.g.
> > removing swapper address spaces, removing swap count continuation
> > code, etc). Another nice cleanup that this work enables would be
> > separating the overloaded swp_entry_t into two distinct types: one for
> > things that are stored in page tables / caches, and one for actual
> > swap entries. In the future, we can potentially further optimize how
> > we use the bits in the page tables instead of sticking everything into
> > the current type/offset format.
> >
> > Another potential win here can be swapoff, which can be more practical
> > by directly scanning all swap_desc's instead of going through page
> > tables and shmem page caches.
> >
> > Overall zswap becomes more accessible and available to a wider range
> > of use cases.
>
> How will you handle zswap writeback? Zswap may writeback to the backed
> swap device IIUC. Assuming you have both zswap and swapfile, they are
> separate devices with this design, right? If so, is the swapfile still
> the writeback target of zswap? And if it is the writeback target, what
> if swapfile is full?

When we try to writeback from zswap, we try to allocate a swap slot in
the swapfile, and switch the swap_desc to point to that instead. The
process would be transparent to the rest of MM (page tables, page
cache, etc). There is a rough sketch of this flow at the end of this
email as well.

If the swapfile is full, then there's really nothing we can do; reclaim
fails and we start OOMing. I imagine this is the same behavior as today
when swap is full; the difference would be that we have to fill both
zswap AND the swapfile to get to the OOMing point, so an overall
increased swapping capacity.

> Anyway I'm interested in attending the discussion for this topic.

Great! Looking forward to discussing this more!

> >
> > ==================== Cost ====================
> > The obvious downside of this is added memory overhead, specifically
> > for users that use swapfiles without zswap. Instead of paying one byte
> > (swap_map) for every potential page in the swapfile (+ swap count
> > continuation), we pay the size of the swap_desc for every page that is
> > actually in the swapfile, which I am estimating can be roughly around
> > 24 bytes or so, so maybe 0.6% of swapped out memory. The overhead only
> > scales with pages actually swapped out. For zswap users, it should be
> > a win (or at least even) because we get to drop a lot of fields from
> > struct zswap_entry (e.g. rbtree, index, etc).
> >
> > Another potential concern is readahead. With this design, we have no
> > way to get a swap_desc given a swap entry (type & offset). We would
> > need to maintain a reverse mapping, adding a little bit more overhead,
> > or search all swapped out pages instead :). A reverse mapping might
> > pump the per-swapped page overhead to ~32 bytes (~0.8% of swapped out
> > memory).
> >
> > ==================== Bottom Line ====================
> > It would be nice to discuss the potential here and the tradeoffs. I
> > know that other folks using zswap (or interested in using it) may find
> > this very useful. I am sure I am missing some context on why things
> > are the way they are, and perhaps some obvious holes in my story.
> > Looking forward to discussing this with anyone interested :)
> >
> > I think Johannes may be interested in attending this discussion, since
> > a lot of ideas here are inspired by discussions I had with him :)
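
P.S. To make the allocation discussion above a bit more concrete, here
is a very rough, untested sketch of what a dynamically allocated
swap_desc could look like. All of the field names are made up for
illustration; the layout is only meant to show how the ~24 byte
per-swapped-page estimate could break down (a backend reference, a
swapcache pointer, and the swap count):

        enum swap_desc_backing {
                SWAP_BACKING_SWAPFILE,  /* backed by a slot on a swap device */
                SWAP_BACKING_ZSWAP,     /* backed by a compressed copy in zswap */
        };

        struct swap_desc {
                union {
                        swp_entry_t slot;               /* swapfile-backed: (type, offset) */
                        struct zswap_entry *zswap;      /* zswap-backed: no slot consumed */
                };
                struct folio *swapcache;        /* swapcache folio, if any */
                unsigned int swap_count;        /* replaces swap_map + continuations */
                unsigned int backing;           /* enum swap_desc_backing */
        };

The swap id stored in page tables / page caches would resolve to one of
these (through an xarray or similar, to be decided), so nothing above
the swap layer would ever see which backend is in use.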
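
And here is an equally rough sketch of the zswap writeback flow I
described above, mainly to show that writeback becomes "repoint the
swap_desc" rather than anything visible to page tables. Every helper
name here is invented, and locking, refcounting and most error handling
are omitted:

        /* Hypothetical; assumes the swap_desc layout sketched above. */
        static int swap_desc_writeback(struct swap_desc *desc)
        {
                struct zswap_entry *entry = desc->zswap;
                swp_entry_t slot;
                int err;

                /* Grab a free slot on the backing swapfile. */
                err = swap_slot_alloc(&slot);           /* invented helper */
                if (err)
                        return err;     /* swapfile full: writeback fails */

                /* Decompress the zswap copy and write it to the slot. */
                err = zswap_write_to_slot(entry, slot); /* invented helper */
                if (err) {
                        swap_slot_free(slot);           /* invented helper */
                        return err;
                }

                /*
                 * Repoint the swap_desc. Page tables and page caches still
                 * hold the same swap id, so nothing outside the swap layer
                 * notices the move.
                 */
                desc->backing = SWAP_BACKING_SWAPFILE;
                desc->slot = slot;

                /* The compressed copy is no longer needed. */
                zswap_entry_free(entry);                /* invented helper */
                return 0;
        }

The swapoff point above would be similar in spirit: walk the
swap_desc's that point at the device being disabled instead of scanning
page tables and shmem page caches.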