From: Vincent Fu <vincent.fu@samsung.com>
To: "axboe@kernel.dk" <axboe@kernel.dk>,
	"fio@vger.kernel.org" <fio@vger.kernel.org>
Cc: Vincent Fu <vincent.fu@samsung.com>
Subject: [PATCH v2 5/5] docs: update discussion of huge page sizes
Date: Tue, 24 May 2022 14:23:24 +0000
Message-ID: <20220524142229.135808-6-vincent.fu@samsung.com>
In-Reply-To: <20220524142229.135808-1-vincent.fu@samsung.com>

Note that the default huge page size is either 2 or 4 MiB, depending on
the platform. Also mention /sys/kernel/mm/hugepages/ as another place to
see the supported huge page sizes.
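
For reference, both locations can be checked from a shell. A quick sketch
(the sample output is from a typical x86-64 system and will differ on
other platforms):

    # default huge page size
    $ grep Hugepagesize /proc/meminfo
    Hugepagesize:       2048 kB

    # one directory per supported huge page size
    $ ls /sys/kernel/mm/hugepages/
    hugepages-1048576kB  hugepages-2048kB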

Signed-off-by: Vincent Fu <vincent.fu@samsung.com>
---
 HOWTO.rst | 25 ++++++++++++++-----------
 fio.1     | 19 ++++++++++---------
 2 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/HOWTO.rst b/HOWTO.rst
index eee386c1..58d02fa2 100644
--- a/HOWTO.rst
+++ b/HOWTO.rst
@@ -1823,13 +1823,14 @@ Buffers and memory
 	**mmaphuge** to work, the system must have free huge pages allocated. This
 	can normally be checked and set by reading/writing
 	:file:`/proc/sys/vm/nr_hugepages` on a Linux system. Fio assumes a huge page
-	is 4MiB in size. So to calculate the number of huge pages you need for a
-	given job file, add up the I/O depth of all jobs (normally one unless
-	:option:`iodepth` is used) and multiply by the maximum bs set. Then divide
-	that number by the huge page size. You can see the size of the huge pages in
-	:file:`/proc/meminfo`. If no huge pages are allocated by having a non-zero
-	number in `nr_hugepages`, using **mmaphuge** or **shmhuge** will fail. Also
-	see :option:`hugepage-size`.
+	is 2 or 4MiB in size, depending on the platform. So to calculate the
+	number of huge pages you need for a given job file, add up the I/O
+	depth of all jobs (normally one unless :option:`iodepth` is used) and
+	multiply by the maximum bs set. Then divide that number by the huge
+	page size. You can see the size of the huge pages in
+	:file:`/proc/meminfo`. If no huge pages are allocated by having a
+	non-zero number in `nr_hugepages`, using **mmaphuge** or **shmhuge**
+	will fail. Also see :option:`hugepage-size`.
 
 	**mmaphuge** also needs to have hugetlbfs mounted and the file location
 	should point there. So if it's mounted in :file:`/huge`, you would use
@@ -1848,10 +1849,12 @@ Buffers and memory
 
 .. option:: hugepage-size=int
 
-	Defines the size of a huge page. Must at least be equal to the system
-	setting, see :file:`/proc/meminfo`. Defaults to 4MiB.  Should probably
-	always be a multiple of megabytes, so using ``hugepage-size=Xm`` is the
-	preferred way to set this to avoid setting a non-pow-2 bad value.
+	Defines the size of a huge page. Must be at least equal to the system
+	setting; see :file:`/proc/meminfo` and
+	:file:`/sys/kernel/mm/hugepages/`. Defaults to 2 or 4MiB, depending
+	on the platform. Should probably always be a multiple of megabytes,
+	so using ``hugepage-size=Xm`` is the preferred way to set this to
+	avoid setting a bad non-power-of-2 value.
 
 .. option:: lockmem=int
 
diff --git a/fio.1 b/fio.1
index ded7bbfc..5f057574 100644
--- a/fio.1
+++ b/fio.1
@@ -1631,11 +1631,11 @@ multiplied by the I/O depth given. Note that for \fBshmhuge\fR and
 \fBmmaphuge\fR to work, the system must have free huge pages allocated. This
 can normally be checked and set by reading/writing
 `/proc/sys/vm/nr_hugepages' on a Linux system. Fio assumes a huge page
-is 4MiB in size. So to calculate the number of huge pages you need for a
-given job file, add up the I/O depth of all jobs (normally one unless
-\fBiodepth\fR is used) and multiply by the maximum bs set. Then divide
-that number by the huge page size. You can see the size of the huge pages in
-`/proc/meminfo'. If no huge pages are allocated by having a non-zero
+is 2 or 4MiB in size, depending on the platform. So to calculate the number of
+huge pages you need for a given job file, add up the I/O depth of all jobs
+(normally one unless \fBiodepth\fR is used) and multiply by the maximum bs set.
+Then divide that number by the huge page size. You can see the size of the huge
+pages in `/proc/meminfo'. If no huge pages are allocated by having a non-zero
 number in `nr_hugepages', using \fBmmaphuge\fR or \fBshmhuge\fR will fail. Also
 see \fBhugepage\-size\fR.
 .P
@@ -1655,10 +1655,11 @@ of subsequent I/O memory buffers is the sum of the \fBiomem_align\fR and
 \fBbs\fR used.
 .TP
 .BI hugepage\-size \fR=\fPint
-Defines the size of a huge page. Must at least be equal to the system
-setting, see `/proc/meminfo'. Defaults to 4MiB. Should probably
-always be a multiple of megabytes, so using `hugepage\-size=Xm' is the
-preferred way to set this to avoid setting a non-pow-2 bad value.
+Defines the size of a huge page. Must be at least equal to the system setting;
+see `/proc/meminfo' and `/sys/kernel/mm/hugepages/'. Defaults to 2 or 4MiB,
+depending on the platform. Should probably always be a multiple of megabytes,
+so using `hugepage\-size=Xm' is the preferred way to set this to avoid setting
+a bad non-power-of-2 value.
 .TP
 .BI lockmem \fR=\fPint
 Pin the specified amount of memory with \fBmlock\fR\|(2). Can be used to
-- 
2.25.1
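
As a worked illustration of the sizing rule described in the patch (the
job numbers are hypothetical and the 2 MiB default is assumed): a job file
with four jobs at iodepth=16 and bs=256k needs

    4 jobs * 16 (iodepth) * 256 KiB (max bs) = 16 MiB of I/O buffers
    16 MiB / 2 MiB (huge page size)          = 8 huge pages

so reserving the pages and running the job might look like this sketch:

    # allocate the huge pages and mount hugetlbfs for mmaphuge
    $ echo 8 | sudo tee /proc/sys/vm/nr_hugepages
    $ sudo mkdir -p /huge
    $ sudo mount -t hugetlbfs none /huge

    # hypothetical job matching the numbers above
    $ fio --name=hugetest --ioengine=libaio --direct=1 --rw=randread \
          --filename=/tmp/fio.dat --size=64m --numjobs=4 --iodepth=16 \
          --bs=256k --mem=mmaphuge:/huge/fio-buffers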
