From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Huang, Ying"
To: Peter Zijlstra, Andrew Morton
Cc: Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Ingo Molnar, Rik van Riel, Johannes Weiner,
	"Matthew Wilcox (Oracle)", Dave Hansen, Andi Kleen,
	Michal Hocko, David Rientjes, linux-api@vger.kernel.org
Subject: Re: [PATCH -V6 RESEND 1/3] numa balancing: Migrate on fault among
	multiple bound nodes
References: <20201202084234.15797-1-ying.huang@intel.com>
	<20201202084234.15797-2-ying.huang@intel.com>
	<20201202114054.GV3306@suse.de>
	<20201203102550.GK2414@hirez.programming.kicks-ass.net>
	<87zh2ulyhc.fsf@yhuang-dev.intel.com>
Date: Thu, 10 Dec 2020 16:21:25 +0800
In-Reply-To: <87zh2ulyhc.fsf@yhuang-dev.intel.com> (Ying Huang's message of
	"Fri, 04 Dec 2020 17:19:43 +0800")
Message-ID: <87a6umjcl6.fsf@yhuang-dev.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii
Precedence: bulk
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

"Huang, Ying" writes:

> Peter Zijlstra writes:
>
>> On Wed, Dec 02, 2020 at 11:40:54AM +0000, Mel Gorman wrote:
>>> On Wed, Dec 02, 2020 at 04:42:32PM +0800, Huang Ying wrote:
>>> > Now, NUMA balancing can only optimize the page placement among the
>>> > NUMA nodes if the default memory policy is used. Because the memory
>>> > policy specified explicitly should take precedence. But this seems
>>> > too strict in some situations. For example, on a system with 4 NUMA
>>> > nodes, if the memory of an application is bound to the node 0 and 1,
>>> > NUMA balancing can potentially migrate the pages between the node 0
>>> > and 1 to reduce cross-node accessing without breaking the explicit
>>> > memory binding policy.
>>> >
>>>
>>> Ok, I think this part is ok and while the test case is somewhat
>>> superficial, it at least demonstrated that the NUMA balancing overhead
>>> did not offset any potential benefit
>>>
>>> Acked-by: Mel Gorman
>>
>> Who do we expect to merge this, me through tip/sched/core or akpm ?
>
> Hi, Peter,
>
> Per my understanding, this is NUMA balancing related, so could go
> through your tree.
>
> BTW: I have just sent -V7 with some small changes per Mel's latest
> comments.
>
> Hi, Andrew,
>
> Do you agree?

So, what's the conclusion here? Both paths work for me. I will update
2/3 per Alejandro Colomar's comments. But that's for man-pages only,
not for the kernel. So, we can merge this one into the kernel if you
think it's appropriate.

Best Regards,
Huang, Ying