From: Nitin Gupta
Date: Tue, 12 May 2020 16:40:56 -0700
Subject: Re: [PATCH v4] mm: Proactive compaction
To: Nitin Gupta <nigupta@nvidia.com>
Cc: Mel Gorman, Michal Hocko, Vlastimil Babka, Matthew Wilcox,
 Andrew Morton, Mike Kravetz, Joonsoo Kim, David Rientjes,
 linux-kernel, linux-mm, Linux API
In-Reply-To: <20200428221055.598-1-nigupta@nvidia.com>

On Tue, Apr 28, 2020 at 3:11 PM Nitin Gupta <nigupta@nvidia.com> wrote:
> For some applications, we need to allocate almost all memory as
> hugepages. However, on a running system, higher-order allocations can
> fail if the memory is fragmented. The Linux kernel currently does
> on-demand compaction as we request more hugepages, but this style of
> compaction incurs very high latency. Experiments with one-time full
> memory compaction (followed by hugepage allocations) show that the
> kernel is able to restore a highly fragmented memory state to a fairly
> compacted state within <1 sec for a 32G system. Such data suggests
> that more proactive compaction can help us allocate a large fraction
> of memory as hugepages while keeping allocation latencies low.
>
> To make compaction more proactive, the approach taken here is to
> define a new tunable called 'proactiveness', which dictates the bounds
> on external fragmentation w.r.t. HUGETLB_PAGE_ORDER that kcompactd
> tries to maintain.
>
> The tunable is exposed through sysfs:
>   /sys/kernel/mm/compaction/proactiveness
>
> <snip>

Ping.

Any comments/feedback?

-Nitin
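P.S. For anyone who wants to try the knob quickly, here is a minimal
userspace sketch for driving the tunable. The sysfs path is taken from
the patch description; the accepted range (0-100) and the default of 20
are my reading of this series and may change in later revisions.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	const char *path = "/sys/kernel/mm/compaction/proactiveness";
	/* Assumed default of 20; pass a value on the command line to
	 * override. 0 disables proactive compaction; larger values make
	 * kcompactd work harder to bound external fragmentation. */
	int value = (argc > 1) ? atoi(argv[1]) : 20;

	FILE *f = fopen(path, "w");
	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	if (fprintf(f, "%d\n", value) < 0) {
		perror("write");
		fclose(f);
		return EXIT_FAILURE;
	}
	if (fclose(f) == EOF) {
		perror("close");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}

(Needs root, and a kernel with this series applied, of course.)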