From: Nitin Gupta
Date: Tue, 26 May 2020 10:19:18 -0700
Subject: Re: [PATCH v5] mm: Proactive compaction
In-Reply-To: <20200518181446.25759-1-nigupta@nvidia.com>
To: Nitin Gupta
Cc: Mel Gorman, Michal Hocko, Vlastimil Babka, Matthew Wilcox, Andrew Morton,
    Mike Kravetz, Joonsoo Kim, David Rientjes, linux-kernel, linux-mm, Linux API


On Mon, May 18, 2020 at 11:14 AM Nitin Gupta <nigupta@nvidia.com> wrote:
> For some applications, we need to allocate almost all memory as
> hugepages. However, on a running system, higher-order allocations can
> fail if the memory is fragmented. The Linux kernel currently does
> on-demand compaction as we request more hugepages, but this style of
> compaction incurs very high latency. Experiments with one-time full
> memory compaction (followed by hugepage allocations) show that the
> kernel is able to restore a highly fragmented memory state to a fairly
> compacted state in under 1 second on a 32 GB system. Such data suggests
> that more proactive compaction can help us allocate a large fraction of
> memory as hugepages while keeping allocation latencies low.
>
> For more proactive compaction, the approach taken here is to define a
> new tunable called 'proactiveness', which dictates the bounds on
> external fragmentation (wrt HUGETLB_PAGE_ORDER) that kcompactd tries to
> maintain.
>
> The tunable is exposed through sysctl:
>   /proc/sys/vm/compaction_proactiveness
>
> It takes a value in the range [0, 100], with a default of 20.
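
For anyone who wants to poke at the knob from userspace once this lands,
here is a minimal sketch (not part of the patch itself) that reads the
proposed sysctl and optionally writes a new value. It only assumes the
file shows up at the path quoted above and accepts integers in the
[0, 100] range described there:

/*
 * Userspace sketch: read /proc/sys/vm/compaction_proactiveness and,
 * if an argument is given, set it to a new value in [0, 100].
 */
#include <stdio.h>
#include <stdlib.h>

#define PROACTIVENESS_PATH "/proc/sys/vm/compaction_proactiveness"

int main(int argc, char **argv)
{
        FILE *f = fopen(PROACTIVENESS_PATH, "r");
        unsigned int cur;

        if (!f || fscanf(f, "%u", &cur) != 1) {
                perror(PROACTIVENESS_PATH);
                return 1;
        }
        fclose(f);
        printf("compaction proactiveness: %u\n", cur);

        if (argc > 1) {
                /* Optionally write a new value, range-checked to [0, 100]. */
                unsigned long val = strtoul(argv[1], NULL, 10);

                if (val > 100) {
                        fprintf(stderr, "value must be in [0, 100]\n");
                        return 1;
                }
                f = fopen(PROACTIVENESS_PATH, "w");
                if (!f || fprintf(f, "%lu\n", val) < 0) {
                        perror(PROACTIVENESS_PATH);
                        return 1;
                }
                fclose(f);
        }
        return 0;
}

Running it with an argument, e.g. "./proactiveness 30", would raise the
value, on the assumption (as the name suggests) that higher values mean
more aggressive background compaction.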



Ping.

Any comments/feedback on this v5 patch?

Thanks,
Nitin
