From: "Huang, Ying" <ying.huang@intel.com>
To: Yosry Ahmed
Cc: Chris Li, lsf-pc@lists.linux-foundation.org, Johannes Weiner,
 Linux-MM, Michal Hocko, Shakeel Butt, David Rientjes, Hugh Dickins,
 Seth Jennings, Dan Streetman, Vitaly Wool, Yang Shi, Peter Xu,
 Minchan Kim, Andrew Morton, Aneesh Kumar K V, Wei Xu
Subject: Re: [LSF/MM/BPF TOPIC] Swap Abstraction / Native Zswap
References: <87356e850j.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87y1o571aa.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87o7ox762m.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87bkkt5e4o.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87y1ns3zeg.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <874jqcteyq.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87v8isrwck.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87bkkkrps8.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 24 Mar 2023 10:37:28 +0800
In-Reply-To: (Yosry Ahmed's message of "Thu, 23 Mar 2023 08:18:23 -0700")
Message-ID: <87sfduri1j.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Yosry Ahmed writes:

> On Wed, Mar 22, 2023 at 10:39 PM Huang, Ying wrote:
>>
>> Yosry Ahmed writes:
>>
>> > On Wed, Mar 22, 2023 at 8:17 PM Huang, Ying wrote:
>> >>
>> >> Yosry Ahmed writes:
>> >>
>> >> > On Wed, Mar 22, 2023 at 6:50 PM Huang, Ying wrote:
>> >> >>
>> >> >> Yosry Ahmed writes:
>> >> >>
>> >> >> > On Sun, Mar 19, 2023 at 7:56 PM Huang, Ying wrote:
>> >> >> >>
>> >> >> >> Yosry Ahmed writes:
>> >> >> >>
>> >> >> >> > On Thu, Mar 16, 2023 at 12:51 AM Huang, Ying wrote:
>> >> >> >> >>
>> >> >> >> >> Yosry Ahmed writes:
>> >> >> >> >>
>> >> >> >> >> > On Sun, Mar 12, 2023 at 7:13 PM Huang, Ying wrote:
>> >> >> >> >> >>
>> >> >> >> >> >> Yosry Ahmed writes:

[snip]

>> >> >> >> >> > xarray (b) is indexed by swap id as well
>> >> >> >> >> > and contains swap entry or zswap entry. Reverse mapping might be
>> >> >> >> >> > needed.
>> >> >> >> >>
>> >> >> >> >> Reverse mapping isn't needed.
>> >> >> >> >
>> >> >> >> > It would be needed if xarray (a) is indexed by the swap id. I am not
>> >> >> >> > sure I understand how it can be indexed by the swap entry if the
>> >> >> >> > indirection is enabled.
>> >> >> >> >
>> >> >> >> >> > In this case we have an extra overhead of 12-16 bytes + 8 bytes for
>> >> >> >> >> > xarray (b) entry + memory overhead from 2nd xarray + reverse mapping
>> >> >> >> >> > where needed.
>> >> >> >> >> >
>> >> >> >> >> > There is also the extra cpu overhead for an extra lookup in certain paths.
>> >> >> >> >> >
>> >> >> >> >> > Is my analysis correct? If yes, I agree that the original proposal is
>> >> >> >> >> > good if the reverse mapping can be avoided in enough situations, and
>> >> >> >> >> > that we should consider such alternatives otherwise. As I mentioned
>> >> >> >> >> > above, I think it comes down to whether we can completely restrict
>> >> >> >> >> > cluster readahead to rotating disks or not -- in which case we need to
>> >> >> >> >> > decide what to do for shmem and for anon when vma readahead is
>> >> >> >> >> > disabled.
>> >> >> >> >>
>> >> >> >> >> We can even have a minimal indirection implementation, where the
>> >> >> >> >> swap cache and swap_map[] are kept as they were before and just one
>> >> >> >> >> xarray is added. The xarray is indexed by swap id (or swap_desc
>> >> >> >> >> index) to store the corresponding swap entry.
>> >> >> >> >>
>> >> >> >> >> When indirection is disabled, there is no extra overhead.
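[The minimal design described above can be sketched as a toy userspace model. This is purely illustrative Python, not kernel code; the class and names (`MinimalIndirection`, `swap_id_to_entry`) are hypothetical stand-ins, and the dict stands in for the one extra xarray.]

```python
# Toy model of the minimal indirection layer described above.
# When indirection is off, the handle stored in PTEs *is* the swap
# entry and there is no extra cost; when it is on, one extra map
# (an xarray in the kernel, ~8 bytes per swapped page) resolves
# swap id -> swap entry.

class MinimalIndirection:
    def __init__(self, enabled=False):
        self.enabled = enabled
        self.swap_id_to_entry = {}  # the single added xarray

    def alloc(self, swap_id, swap_entry):
        if self.enabled:
            self.swap_id_to_entry[swap_id] = swap_entry

    def resolve(self, handle):
        # Indirection disabled: the handle is already the swap entry.
        if not self.enabled:
            return handle
        return self.swap_id_to_entry[handle]

    def migrate(self, swap_id, new_entry):
        # Moving a page between swap implementations rewrites only the
        # id -> entry slot; PTEs holding the swap id are untouched.
        self.swap_id_to_entry[swap_id] = new_entry

ind = MinimalIndirection(enabled=True)
ind.alloc(swap_id=1, swap_entry=("swapfile", 42))
print(ind.resolve(1))        # -> ('swapfile', 42)
ind.migrate(1, ("zswap", 7))
print(ind.resolve(1))        # -> ('zswap', 7)
```

The point of the sketch is that basic migration support falls out of `migrate()` alone, without touching the swap cache or swap_map[].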
>> >> >> >> >>
>> >> >> >> >> When indirection is enabled, the extra overhead is just 8 bytes per
>> >> >> >> >> swapped page.
>> >> >> >> >>
>> >> >> >> >> The basic migration support can be built on top of this.
>> >> >> >> >>
>> >> >> >> >> I think that this could be a baseline for indirection support. Then
>> >> >> >> >> further optimization can be built on top of it step by step, with
>> >> >> >> >> supporting data.
>> >> >> >> >
>> >> >> >> > I am not sure how this works with zswap. Currently the swap_map[]
>> >> >> >> > implementation is specific to swapfiles; it does not work for zswap
>> >> >> >> > unless we implement separate swap counting logic for zswap and
>> >> >> >> > swapfiles. The same goes for the swap cache: it currently supports
>> >> >> >> > being indexed by a swap entry, so it would need to support being
>> >> >> >> > indexed by a swap id, or have a separate swap cache for zswap.
>> >> >> >> > Having separate implementations would add complexity, and we would
>> >> >> >> > need to perform handoffs of the swap count/cache when a page is
>> >> >> >> > moved from zswap to a swapfile.
>> >> >> >>
>> >> >> >> We can allocate a swap entry for each swapped page in zswap.
>> >> >> >
>> >> >> > This is exactly what the current implementation does and what we want
>> >> >> > to move away from. The current implementation uses zswap as an
>> >> >> > in-memory compressed cache on top of an actual swap device, and each
>> >> >> > swapped page in zswap has a swap entry allocated. With this
>> >> >> > implementation, zswap cannot be used without a swap device.
>> >> >>
>> >> >> I totally agree that we should avoid using an actual swap device under
>> >> >> zswap. And, as a swap implementation, zswap can manage the swap entry
>> >> >> inside zswap without an underlying actual swap device. For example,
>> >> >> when we swap a page to zswap (actually, compress it), we can allocate
>> >> >> a (virtual) swap entry in zswap.
>> >> >> I understand that there's overhead
>> >> >> to manage the swap entry in zswap. We can consider how to reduce that
>> >> >> overhead.
>> >> >
>> >> > I see. So we can (for example) use one of the swap types for zswap,
>> >> > and then have the zswap code handle this entry according to its
>> >> > implementation. We can then have an xarray that maps swap ID -> swap
>> >> > entry, and this swap entry is used to index the swap cache and such.
>> >> > When a swapped page is moved between backends, we update the swap ID ->
>> >> > swap entry xarray.
>> >> >
>> >> > This is one possible implementation that I thought of (very briefly,
>> >> > tbh), but it does have its problems.
>> >> >
>> >> > For zswap:
>> >> > - Managing swap entries inside zswap unnecessarily.
>> >> > - We need to maintain a swap entry -> zswap entry mapping in zswap --
>> >> > similar to the current rbtree, which is something that we can get rid
>> >> > of with the initial proposal if we embed the zswap_entry pointer
>> >> > directly in the swap_desc (it can be encoded to avoid breaking the
>> >> > abstraction).
>> >> >
>> >> > For mm/swap in general:
>> >> > - When we allocate a swap entry today, we store it in folio->private
>> >> > (or page->private), which is used by the unmapping code to place it
>> >> > in the page tables or the shmem page cache. With this implementation,
>> >> > we need to store the swap ID in page->private instead, which means
>> >> > that every time we need to access the swap cache during
>> >> > reclaim/swapout we need to look up the swap entry first.
>> >> > - On the fault path, we need two lookups instead of one (swap ID ->
>> >> > swap entry, then swap entry -> swap cache); I am not sure how this
>> >> > affects fault latency.
>> >> > - Each swap backend will have its own separate implementation of swap
>> >> > counting, which is hard to maintain and very error-prone since the
>> >> > logic is backend-agnostic.
>> >> > - Handing over a page from one swap backend to another includes
>> >> > handing over swap cache entries and swap counts, which I imagine will
>> >> > involve considerable synchronization.
>> >> >
>> >> > Do you have any thoughts on this?
>> >>
>> >> Yes. I understand there's additional overhead. I have no clear idea
>> >> about how to reduce it now. We need to think about that in depth.
>
> I agree that we need to think deeper about the tradeoff here. It seems
> like the extra xarray lookup may not be a huge problem, but there are
> other concerns, such as having separate implementations of swap
> counting that are basically doing the same thing in different ways for
> different backends.

In fact, I just suggest using the minimal design on top of the current
implementation as the first step; then you can improve it step by step.

The first step could be the minimal effort to implement the indirection
layer and to move swapped pages between swap implementations. Based on
that, you can build other optimizations, such as pulling swap counting
into the swap core. For each step, we can evaluate the gain and cost
with data.

Anyway, I don't think you can implement your whole final solution in
one step. And I think the minimal design suggested could be a starting
point.

>> >>
>> >> The bottom line is: is this worse than the current zswap
>> >> implementation?
>> >
>> > It's not just zswap; as I note above, this design would introduce some
>> > overhead into the core swapping code as well, as long as the
>> > indirection layer is active. I am particularly worried about the extra
>> > lookups on the fault path.
>>
>> Maybe you can measure the time for the radix tree lookup and compare
>> it with the total fault time?
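[For concreteness, the two lookups being discussed on the fault path can be modeled as below. This is an illustrative Python toy, not kernel code; the dicts stand in for the indirection xarray and the swap cache, and all names are hypothetical.]

```python
# Toy model of the fault-path concern: with indirection, page->private
# (and the PTE) holds a swap id, so reaching the swap cache needs
# swap id -> swap entry -> folio, versus today's single
# swap entry -> folio lookup.

swap_id_to_entry = {101: ("swapfile", 42)}      # the indirection xarray
swap_cache = {("swapfile", 42): "folio@0xabc"}  # keyed by swap entry

def fault_without_indirection(swap_entry):
    return swap_cache.get(swap_entry)           # one lookup

def fault_with_indirection(swap_id):
    swap_entry = swap_id_to_entry[swap_id]      # extra lookup (1st)
    return swap_cache.get(swap_entry)           # swap cache lookup (2nd)

assert fault_without_indirection(("swapfile", 42)) == "folio@0xabc"
assert fault_with_indirection(101) == "folio@0xabc"
```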
>
> I ran a simple test with perf, swapping in a 1G shmem file:
>
> |--1.91%--swap_cache_get_folio
> |          |
> |           --1.32%--__filemap_get_folio
> |                     |
> |                      --0.66%--xas_load
>
> It seems like the swap cache lookup is ~2%, and < 1% is coming from the
> xarray lookup. I am not sure if the lookup time varies a lot with
> fragmentation and different access patterns, but it seems like it's
> generally not a major contributor to the latency.

Thanks for the data!

>> > For zswap, we already have a lookup today, so maintaining a swap entry
>> > -> zswap entry mapping would not be a regression, but I am not sure
>> > about the extra overhead of managing swap entries within zswap. Keep
>> > in mind that using swap entries for zswap probably implies having a
>> > fixed/max size for zswap (to be able to manage the swap entries
>> > efficiently, similar to swap devices), which is a limitation that the
>> > initial proposal was hoping to overcome.
>>
>> We have limited bits in the PTE, so the max number of zswap entries
>> will be limited anyway. And we don't need to manage swap entries in the
>> same way as for disks (which need to consider sequential writing, etc.).
>
> Right, the number of bits available would impose a maximum on the swap
> ID, which would imply a maximum on the number of zswap entries. The
> concern is about managing swap entries within zswap. If zswap needs to
> keep track of the entries it has allocated and the entries that are
> free, it needs a data structure to do so (e.g. a bitmap). The size of
> this data structure can potentially scale with the maximum number of
> entries, so we would want to impose a virtual limit on zswap entries to
> limit the size of the data structure. Alternatively, we can have a
> dynamic data structure, but this also comes with its complexities.

Yes. We will need that.

Best Regards,
Huang, Ying
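[Editorial aside: the fixed-size bookkeeping discussed at the end -- a bitmap tracking allocated/free virtual zswap entries, whose memory cost scales with the entry cap -- could be modeled as below. This is a hypothetical Python toy, not existing zswap code.]

```python
# Toy bitmap allocator for virtual zswap swap entries. The bitmap
# costs roughly max_entries / 8 bytes, which is why a virtual cap on
# the number of zswap entries would be needed to bound it.

class ZswapEntryBitmap:
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.bits = bytearray((max_entries + 7) // 8)

    def alloc(self):
        # Linear scan for the first clear bit (a real implementation
        # would track a hint or use find_first_zero_bit-style helpers).
        for i in range(self.max_entries):
            byte, bit = divmod(i, 8)
            if not self.bits[byte] & (1 << bit):
                self.bits[byte] |= 1 << bit
                return i
        return None  # out of virtual entries

    def free(self, i):
        byte, bit = divmod(i, 8)
        self.bits[byte] &= ~(1 << bit)

bm = ZswapEntryBitmap(max_entries=1024)
a = bm.alloc()
b = bm.alloc()
print(a, b)        # -> 0 1
bm.free(a)
print(bm.alloc())  # -> 0 (freed slot is reused)
```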