From: Yu Zhao <yuzhao@google.com>
Date: Wed, 16 Mar 2022 15:37:06 -0600
Subject: Re: [PATCH v7 04/12] mm: multigenerational LRU: groundwork
To: Barry Song <21cnbao@gmail.com>
Cc: Konstantin Kharlamov, Michael Larabel, Andi Kleen, Andrew Morton,
    Aneesh Kumar K.V, Jens Axboe, Brian Geffon, Catalin Marinas,
    Jonathan Corbet, Donald Carr, Dave Hansen, Daniel Byrne,
    Johannes Weiner, Hillf Danton, Jan Alexander Steffens,
    Holger Hoffstätte, Jesse Barnes, Linux ARM, open list:DOCUMENTATION,
    linux-kernel, Linux-MM, Mel Gorman, Michal Hocko, Oleksandr Natalenko,
    Kernel Page Reclaim v2, Rik van Riel, Mike Rapoport, Sofia Trinh,
    Steven Barrett, Suleiman Souhlal, Shuang Zhai, Linus Torvalds,
    Vlastimil Babka, Will Deacon, Matthew Wilcox, the arch/x86 maintainers,
    Huang Ying
V" , Jens Axboe , Brian Geffon , Catalin Marinas , Jonathan Corbet , Donald Carr , Dave Hansen , Daniel Byrne , Johannes Weiner , Hillf Danton , Jan Alexander Steffens , =?UTF-8?Q?Holger_Hoffst=C3=A4tte?= , Jesse Barnes , Linux ARM , "open list:DOCUMENTATION" , linux-kernel , Linux-MM , Mel Gorman , Michal Hocko , Oleksandr Natalenko , Kernel Page Reclaim v2 , Rik van Riel , Mike Rapoport , Sofia Trinh , Steven Barrett , Suleiman Souhlal , Shuang Zhai , Linus Torvalds , Vlastimil Babka , Will Deacon , Matthew Wilcox , "the arch/x86 maintainers" , Huang Ying Content-Type: text/plain; charset="UTF-8" X-Rspamd-Queue-Id: ABE101C001D X-Rspam-User: Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="oO9Rw/zY"; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf18.hostedemail.com: domain of yuzhao@google.com designates 209.85.217.49 as permitted sender) smtp.mailfrom=yuzhao@google.com X-Stat-Signature: s4d8uwstnoo6iuj9fgk8rw9sse187tfn X-Rspamd-Server: rspam04 X-HE-Tag: 1647466638-383193 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Wed, Mar 16, 2022 at 12:06 AM Barry Song <21cnbao@gmail.com> wrote: < snipped> > > The cost is not the point; the fairness is: > > > > 1) Ramdisk is fair to both LRU algorithms. > > 2) Zram punishes the LRU algorithm that chooses incompressible pages. > > IOW, this algorithm needs to compress more pages in order to save the > > same amount of memory. > > I see your point. but my point is that with higher I/O cost to swap > in and swap out pages, more major faults(lower hit ratio) will > contribute to the loss of final performance. > > So for the particular case, if we move to a real disk as a swap > device, we might see the same result as zRAM I was using > since you also reported more page faults. If we wanted to talk about I/O cost, we would need to consider the number of writes and writeback patterns as well. The LRU algorithm that *unconsciously* picks more clean pages has an advantage because writes are usually slower than reads. Similarly, the LRU algorithm that *unconsciously* picks a cluster of cold pages that later would be faulted in together also has the advantage because sequential reads are faster than random reads. Do we want to go into this rabbit hole? I think not. That's exactly why I suggested we focus on the fairness. But, just outta curiosity, MGLRU was faster when swapping to a slow MMC disk. 

# mmc cid read /sys/class/mmc_host/mmc1/mmc1:0001
type: 'MMC'
manufacturer: 'SanDisk-Toshiba Corporation' ''
product: 'DA4064' 1.24400152
serial: 0x00000000
manfacturing date: 2006 aug

# baseline + THP=never
0 records/s
real 872.00 s
user 51.69 s
sys 483.09 s

  13.07%  __memcpy_neon
  11.37%  __pi_clear_page
   9.35%  _raw_spin_unlock_irq
   5.52%  mod_delayed_work_on
   5.17%  _raw_spin_unlock_irqrestore
   3.95%  do_raw_spin_lock
   3.87%  rmqueue_pcplist
   3.60%  local_daif_restore
   3.17%  free_unref_page_list
   2.74%  zap_pte_range
   2.00%  handle_mm_fault
   1.19%  do_anonymous_page

# MGLRU + THP=never
0 records/s
real 821.00 s
user 44.45 s
sys 428.21 s

  13.28%  __memcpy_neon
  12.78%  __pi_clear_page
   9.14%  _raw_spin_unlock_irq
   5.95%  _raw_spin_unlock_irqrestore
   5.08%  mod_delayed_work_on
   4.45%  do_raw_spin_lock
   3.86%  local_daif_restore
   3.81%  rmqueue_pcplist
   3.32%  free_unref_page_list
   2.89%  zap_pte_range
   1.89%  handle_mm_fault
   1.10%  do_anonymous_page

# baseline + THP=madvise
0 records/s
real 1341.00 s
user 68.15 s
sys 681.42 s

  12.33%  __memcpy_neon
  11.78%  _raw_spin_unlock_irq
   8.79%  __pi_clear_page
   7.63%  mod_delayed_work_on
   5.49%  _raw_spin_unlock_irqrestore
   3.23%  local_daif_restore
   3.00%  do_raw_spin_lock
   2.83%  rmqueue_pcplist
   2.21%  handle_mm_fault
   2.00%  zap_pte_range
   1.51%  free_unref_page_list
   1.33%  do_swap_page
   1.17%  do_anonymous_page

# MGLRU + THP=madvise
0 records/s
real 1315.00 s
user 60.59 s
sys 620.56 s

  12.34%  __memcpy_neon
  12.17%  _raw_spin_unlock_irq
   9.33%  __pi_clear_page
   7.33%  mod_delayed_work_on
   6.01%  _raw_spin_unlock_irqrestore
   3.27%  local_daif_restore
   3.23%  do_raw_spin_lock
   2.98%  rmqueue_pcplist
   2.12%  handle_mm_fault
   2.04%  zap_pte_range
   1.65%  free_unref_page_list
   1.27%  do_swap_page
   1.11%  do_anonymous_page
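
To make the earlier point about I/O cost a bit more concrete, here is a
minimal back-of-the-envelope sketch. It is not data from the runs above:
every device timing and policy mix in it is an invented number, used only
to illustrate how writeback volume and the sequential vs. random swap-in
mix can outweigh the raw refault count in the per-page reclaim cost.

/*
 * Back-of-the-envelope sketch only -- not measurements from the runs
 * above. All device timings and policy mixes are invented numbers,
 * used to illustrate why per-page I/O cost depends on writeback volume
 * and on whether refaults are sequential, not just on refault count.
 */
#include <stdio.h>

/* Hypothetical device costs, in microseconds per 4KiB page. */
#define SEQ_READ_US	100.0	/* clustered (sequential) swap-in */
#define RND_READ_US	400.0	/* random swap-in */
#define WRITE_US	800.0	/* writeback of a dirty page */

struct policy {
	const char *name;
	double dirty_frac;	/* evicted pages that need writeback */
	double refault_frac;	/* evicted pages later faulted back in */
	double seq_frac;	/* refaults served by sequential reads */
};

static double io_cost_per_evicted_page(const struct policy *p)
{
	double read_us = p->seq_frac * SEQ_READ_US +
			 (1.0 - p->seq_frac) * RND_READ_US;

	return p->dirty_frac * WRITE_US + p->refault_frac * read_us;
}

int main(void)
{
	/* Two made-up policies: B refaults more but clusters its reads. */
	const struct policy policies[] = {
		{ "policy A", 0.60, 0.20, 0.30 },
		{ "policy B", 0.45, 0.25, 0.70 },
	};

	for (unsigned int i = 0; i < sizeof(policies) / sizeof(policies[0]); i++)
		printf("%s: ~%.0f us of I/O per evicted page\n",
		       policies[i].name,
		       io_cost_per_evicted_page(&policies[i]));

	return 0;
}

With these made-up numbers, "policy B" comes out cheaper per evicted page
even though it refaults more, which is the whole point: refault count
alone does not determine total I/O time once writes and read patterns are
taken into account.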