From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 24 May 2022 09:12:09 +0100
From: Catalin Marinas
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, will@kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	hanchuanhua@oppo.com, zhangshiming@oppo.com, guojian@oppo.com,
	Barry Song, "Huang, Ying", Minchan Kim, Johannes Weiner,
	Hugh Dickins, Shaohua Li, Rik van Riel, Andrea Arcangeli,
	Steven Price
Subject: Re: [PATCH] arm64: enable THP_SWAP for arm64
Message-ID: (elided)
References: <20220524071403.128644-1-21cnbao@gmail.com>
In-Reply-To: <20220524071403.128644-1-21cnbao@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 24, 2022 at 07:14:03PM +1200, Barry Song wrote:
> From: Barry Song
>
> THP_SWAP has been proved to improve the swap throughput significantly
> on x86_64 according to commit bd4c82c22c367e ("mm, THP, swap: delay
> splitting THP after swapped out").
> As long as arm64 uses 4K page size, it is quite similar with x86_64
> by having 2MB PMD THP. So we are going to get similar improvement.
> For other page sizes such as 16KB and 64KB, PMD might be too large.
> Negative side effects such as IO latency might be a problem. Thus,
> we can only safely enable the counterpart of X86_64.
>
> Cc: "Huang, Ying"
> Cc: Minchan Kim
> Cc: Johannes Weiner
> Cc: Hugh Dickins
> Cc: Shaohua Li
> Cc: Rik van Riel
> Cc: Andrea Arcangeli
> Signed-off-by: Barry Song
> ---
>  arch/arm64/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index d550f5acfaf3..8e3771c56fbf 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -98,6 +98,7 @@ config ARM64
>  	select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
>  	select ARCH_WANT_LD_ORPHAN_WARN
>  	select ARCH_WANTS_NO_INSTR
> +	select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES

I'm not opposed to this but I think it would break pages mapped with
PROT_MTE. We have an assumption in mte_sync_tags() that compound pages
are not swapped out (or in). With MTE, we store the tags in a slab
object (128 bytes per swapped page) and restore them when pages are
swapped in. At some point we may teach the core swap code about such
metadata but in the meantime that was the easiest way.
-- 
Catalin