linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@suse.com>
To: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>, Tony <tony.luck@intel.com>,
	Tim C Chen <tim.c.chen@intel.com>,
	"Huang, Ying" <ying.huang@intel.com>,
	"Du, Julie" <julie.du@intel.com>
Subject: Re: Test report for kernel direct mapping performance
Date: Tue, 26 Jan 2021 16:00:16 +0100	[thread overview]
Message-ID: <20210126150016.GT827@dhcp22.suse.cz> (raw)
In-Reply-To: <213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com>

On Fri 15-01-21 15:23:07, Xing Zhengjun wrote:
> Hi,
> 
> There is currently a bit of a debate about the kernel direct map. Does using
> 2M/1G pages aggressively for the kernel direct map help performance? Or, is
> it an old optimization which is not as helpful on modern CPUs as it was in
> the old days? What is the penalty of a kernel feature that heavily demotes
> this mapping from larger to smaller pages? We did a set of runs with 1G and
> 2M pages enabled/disabled and saw the changes.
> 
> [Conclusions]
> 
> Assuming that this was a good representative set of workloads and that the
> data are good, for server usage, we conclude that the existing aggressive
> use of 1G mappings is a good choice since it represents the best in a
> plurality of the workloads. However, in a *majority* of cases, another
> mapping size (2M or 4k) potentially offers a performance improvement. This
> leads us to conclude that although 1G mappings are a good default choice,
> there is no compelling evidence that it must be the only choice, or that
> folks deriving benefits (like hardening) from smaller mapping sizes should
> avoid the smaller mapping sizes.

Thanks for conducting these tests! This is definitely useful and quite
honestly I would have expected much more noticeable differences.
Please note that I am not really deep into benchmarking, but one thing
that popped into my mind was whether these (micro)benchmarks are really
representative workloads. AFAIU some of them tend to be rather narrow in
the code paths they execute or the data structures they use. Is it
possible they simply didn't generate sufficient TLB pressure?
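
One quick sanity check along these lines is to confirm how the direct
map actually ended up split between page sizes in each configuration:
on x86, /proc/meminfo exposes DirectMap4k/2M/1G counters, and the
`nogbpages` boot option disables the 1G mappings. A rough sketch (the
DirectMap values below are made-up sample numbers standing in for real
/proc/meminfo output, not a measurement):

```shell
# How is the direct map split between page sizes? On x86, /proc/meminfo
# has DirectMap4k/DirectMap2M/DirectMap1G counters; the values below are
# illustrative sample output, not from a real run.
meminfo='DirectMap4k:      311164 kB
DirectMap2M:     8076288 kB
DirectMap1G:    26214400 kB'

# Print each mapping size with its share of the total mapped memory.
printf '%s\n' "$meminfo" | awk '
    /^DirectMap/ { kb[$1] = $2; total += $2 }
    END {
        for (k in kb)
            printf "%-13s %10d kB (%4.1f%% of mapped RAM)\n",
                   k, kb[k], 100 * kb[k] / total
    }'
```

On a live system one would read `grep DirectMap /proc/meminfo` directly,
ideally before and after the workload, since mappings can also be split
at runtime.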

Have you tried to look more closely at profiles of the respective
configurations to see where the overhead comes from?
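
One way to sketch such a comparison (assuming perf is available on the
test machines; `./workload`, the WORKLOAD variable, and the event names
are placeholders, since PMU event names vary by CPU model):

```shell
# Sketch: run the identical workload under each direct-map configuration
# and compare TLB miss counters across boots. Event names differ between
# CPU models; `perf list` shows what the local PMU supports. WORKLOAD is
# a placeholder for the actual benchmark binary.
events="dTLB-load-misses,iTLB-load-misses"
workload="${WORKLOAD:-/bin/true}"

if command -v perf >/dev/null 2>&1; then
    # -r 5 repeats the run so perf reports a mean and stddev per counter.
    perf stat -r 5 -e "$events" -- "$workload" ||
        echo "perf stat failed (perf_event access may be restricted)" >&2
else
    echo "perf not found; TLB counters need linux-tools/perf" >&2
fi
```

`perf record`/`perf report` on the same runs would then show whether the
cycles move in the kernel or in userspace between configurations.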
-- 
Michal Hocko
SUSE Labs


Thread overview: 3+ messages
2021-01-15  7:23 Test report for kernel direct mapping performance Xing Zhengjun
2021-01-26 15:00 ` Michal Hocko [this message]
2021-01-27  7:50   ` Xing Zhengjun
