DPDK-dev Archive on lore.kernel.org
* Re: [dpdk-dev] [PATCH] eal: change max hugepage sizes to 4
@ 2019-08-07 12:47 Gagandeep Singh
  0 siblings, 0 replies; 20+ messages in thread
From: Gagandeep Singh @ 2019-08-07 12:47 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Thomas Monjalon, Burakov, Anatoly, Olivier Matz,
	Andrew Rybchenko, Nipun Gupta, Steve Capper



> 
> On Wed, Aug 7, 2019 at 12:26 PM Gagandeep Singh <g.singh@nxp.com> wrote:
> >
> > DPDK currently supports a maximum of 3 hugepage
> > sizes, whereas the system can support more, e.g.
> > 64K, 2M, 32M and 1G.
> 
> You can mention ARM platform here, and that this issue starts with
> kernel 5.2 (and I would try to mention this in the title as well).
> This is better than an annotation that will be lost.
> 
> 
> > When all four hugepage sizes are available to DPDK,
> > which is the case with the '--in-memory' EAL option or
> > when using 4 separate mount points, one per hugepage size,
> > the hugepage_info_init() API reports an error.
> 
> Can you describe what is the impact from a user point of view rather
> than mentioning this internal function?
> 
> 
> > This change increases the maximum supported mount points
> > to 4.
> 
> I suppose this fix does the trick for you.
> However, we are in internal structures and I can't think of an impact
> on datapath.
> So we might as well use dynamic allocations rather than just enlarge this array.
> 
> Did you consider this?
Yes, we have thought about it, but that would require much more testing across all supported kernels, and possibly on some stacks as well.
MAX_HUGEPAGE_SIZES has been set to a static value of 3 since the beginning, while ARM (and possibly other platforms) has supported 4 sizes for a very long time.

The value of this macro has not changed in a long time; it is simply a mismatch between what DPDK supports and what the underlying hardware supports.
The issue surfaces now because, starting with kernel 5.2, the kernel creates the sysfs directories for each hugepage size by default (rather than taking them from the boot arguments).
Here are the possible cases that we are aware of:

For a 64KB granule, the kernel supports the following huge page sizes:
        2MB     using 32 x 64KB pages which are contiguous
        512MB   using a level 2 block mapping (a pmd_t)
        16GB    using 32 x 512MB block mappings

For a 16KB granule, we have:
        2MB     using 128 x 16KB pages
        32MB    using a level 2 block mapping (a pmd_t)
        1GB     using 32 x 32MB block mappings

For a 4KB granule, we have:
        64KB    using 16 x 4KB pages
        2MB     using a level 2 block mapping (a pmd_t)
        32MB    using 16 x level 2 block mappings
        1GB     using a level 1 block mapping (a pud_t)

Using a static value of 4 should cover all of these cases.

> 
> 
> --
> David Marchand

^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: [dpdk-dev] [PATCH] eal: change max hugepage sizes to 4
@ 2019-08-08  9:00 Gagandeep Singh
  2019-08-08  9:22 ` David Marchand
  0 siblings, 1 reply; 20+ messages in thread
From: Gagandeep Singh @ 2019-08-08  9:00 UTC (permalink / raw)
  To: Thomas Monjalon, Hemant Agrawal
  Cc: dev, David Marchand, Burakov, Anatoly, Olivier Matz,
	Andrew Rybchenko, Nipun Gupta, honnappa.nagarahalli,
	Steve Capper, jerinj, bruce.richardson, gavin.hu,
	konstantin.ananyev, drc

> > Hi Thomas,
> > > > > DPDK currently supports a maximum of 3 hugepage
> > > > > sizes, whereas the system can support more, e.g.
> > > > > 64K, 2M, 32M and 1G.
> > > >
> > > > You can mention ARM platform here, and that this issue starts with
> > > > kernel 5.2 (and I would try to mention this in the title as well).
> > > > This is better than an annotation that will be lost.
> > > >
> > > >
> > > > > When all four hugepage sizes are available to DPDK,
> > > > > which is the case with the '--in-memory' EAL option or
> > > > > when using 4 separate mount points, one per hugepage
> > > > > size, the hugepage_info_init() API reports an error.
> > > >
> > > > Can you describe what is the impact from a user point of view rather
> > > > than mentioning this internal function?
> > >
> > > Yes please, we need to understand how critical it is.
> > > Should we Cc stable@dpdk.org for backport?
> > > Should it be merged at the last minute in 19.08?
> >
> > VPP uses the in-memory option. So, VPP on ARM with kernel 5.2 won't work
> > without this patch.
> 
> Do you want to send a v2 with a better explanation?
> 
> I would suggest to restrict the change to Arm only with an ifdef,
> in order to limit the risk for this release.
> We can think about a dynamic hugepage scan in the next release.
> 
OK, I will send a v2 with a better explanation, and will also add an ifdef to make this change ARM-specific only.

* Re: [dpdk-dev] [PATCH] eal: change max hugepage sizes to 4
@ 2019-08-07 12:53 Gagandeep Singh
  2019-08-07 14:09 ` Honnappa Nagarahalli
  0 siblings, 1 reply; 20+ messages in thread
From: Gagandeep Singh @ 2019-08-07 12:53 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: David Marchand, dev, Burakov,  Anatoly, Olivier Matz,
	Andrew Rybchenko, Nipun Gupta

Hi Thomas,

> 07/08/2019 14:00, David Marchand:
> > On Wed, Aug 7, 2019 at 12:26 PM Gagandeep Singh <g.singh@nxp.com>
> wrote:
> > >
> > > DPDK currently supports a maximum of 3 hugepage
> > > sizes, whereas the system can support more, e.g.
> > > 64K, 2M, 32M and 1G.
> >
> > You can mention ARM platform here, and that this issue starts with
> > kernel 5.2 (and I would try to mention this in the title as well).
> > This is better than an annotation that will be lost.
> >
> >
> > > When all four hugepage sizes are available to DPDK,
> > > which is the case with the '--in-memory' EAL option or
> > > when using 4 separate mount points, one per hugepage size,
> > > the hugepage_info_init() API reports an error.
> >
> > Can you describe what is the impact from a user point of view rather
> > than mentioning this internal function?
> 
> Yes please, we need to understand how critical it is.

It is critical for stacks like VPP, which use in-memory by default.
Those stacks would be broken on kernel 5.2 and above.
I will change the description to make this clear.

> Should we Cc stable@dpdk.org for backport?
Yes.

> Should it be merged at the last minute in 19.08?
If this is the only change we need to make (i.e. no dynamic allocation), then yes, it can go in.


* [dpdk-dev] [PATCH] eal: change max hugepage sizes to 4
@ 2019-08-07 10:12 Gagandeep Singh
  2019-08-07 12:00 ` David Marchand
  2019-08-07 15:27 ` Stephen Hemminger
  0 siblings, 2 replies; 20+ messages in thread
From: Gagandeep Singh @ 2019-08-07 10:12 UTC (permalink / raw)
  To: dev, thomas
  Cc: anatoly.burakov, olivier.matz, arybchenko, Gagandeep Singh, Nipun Gupta

DPDK currently supports a maximum of 3 hugepage
sizes, whereas the system can support more, e.g.
64K, 2M, 32M and 1G.

When all four hugepage sizes are available to DPDK,
which is the case with the '--in-memory' EAL option or
when using 4 separate mount points, one per hugepage size,
the hugepage_info_init() API reports an error.

This change increases the maximum supported mount points
to 4.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---

On the ARM platform, when the translation granule is 4KB,
4 hugepage sizes are supported.
On kernel 5.2, we can see the directories below in
/sys/kernel/mm/hugepages:

hugepages-1048576kB
hugepages-2048kB
hugepages-32768kB
hugepages-64kB

 lib/librte_eal/common/eal_internal_cfg.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h
index edff09d07..abb7ec913 100644
--- a/lib/librte_eal/common/eal_internal_cfg.h
+++ b/lib/librte_eal/common/eal_internal_cfg.h
@@ -15,7 +15,7 @@
 
 #include "eal_thread.h"
 
-#define MAX_HUGEPAGE_SIZES 3  /**< support up to 3 page sizes */
+#define MAX_HUGEPAGE_SIZES 4  /**< support up to 4 page sizes */
 
 /*
  * internal configuration structure for the number, size and
-- 
2.17.1



Thread overview: 20+ messages
2019-08-07 12:47 [dpdk-dev] [PATCH] eal: change max hugepage sizes to 4 Gagandeep Singh
2019-08-08  9:00 Gagandeep Singh
2019-08-08  9:22 ` David Marchand
2019-08-07 12:53 Gagandeep Singh
2019-08-07 14:09 ` Honnappa Nagarahalli
2019-08-07 10:12 Gagandeep Singh
2019-08-07 12:00 ` David Marchand
2019-08-07 12:07   ` Thomas Monjalon
2019-08-07 13:28     ` Hemant Agrawal
2019-08-08  7:31       ` Thomas Monjalon
2019-08-12  9:43         ` Burakov, Anatoly
2019-08-12  9:49           ` David Marchand
2019-08-12 10:01             ` Thomas Monjalon
2019-08-12 10:38             ` Burakov, Anatoly
2019-08-08  7:33       ` David Marchand
2019-08-08 10:37         ` Hemant Agrawal
2019-08-08 12:29           ` Steve Capper
2019-08-08 12:39             ` David Marchand
2019-08-12  9:42       ` Burakov, Anatoly
2019-08-07 15:27 ` Stephen Hemminger
